NVIDIA GH200 GPU Boosted With World’s Fastest HBM3e Memory, Delivers 5 TB/s Bandwidth


NVIDIA has just announced its boosted GH200 GPU, which now comes equipped with HBM3e, the world's fastest memory solution.

NVIDIA Equips World's Fastest AI GPU With World's Fastest Memory: Meet GH200 HBM3e Edition!

According to NVIDIA, the Hopper GH200 is now the world's first GPU equipped with HBM3e memory, offering not just higher memory bandwidth but also higher memory capacity. A dual Grace Hopper system now delivers 3.5x more capacity and 3x higher bandwidth than the existing offering, with up to 282 GB of HBM3e memory.

HBM3e memory itself is 50% faster than the existing HBM3 standard, delivering up to 10 TB/s of bandwidth per system and 5 TB/s per chip. HBM3e will now power a range of GH200-based systems (400 and counting) spanning a variety of combinations of NVIDIA's latest CPU, GPU, and DPU architectures, including Grace, Hopper, Ada Lovelace, and BlueField, to meet surging demand in the AI segment.
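The quoted figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming the dual configuration pairs two GH200 chips (consistent with the per-chip and per-system numbers above):

```python
# Sanity check of the bandwidth figures quoted in the article.
# Assumption (not stated explicitly): the dual configuration uses two GH200 chips.
HBM3E_PER_CHIP_TBS = 5.0   # TB/s per GH200 chip, per the article
CHIPS_PER_DUAL_SYSTEM = 2

combined = HBM3E_PER_CHIP_TBS * CHIPS_PER_DUAL_SYSTEM
print(f"Combined dual-system bandwidth: {combined:.0f} TB/s")  # matches the 10 TB/s figure

# HBM3e is quoted as 50% faster than HBM3, so the implied HBM3 per-chip rate is:
implied_hbm3 = HBM3E_PER_CHIP_TBS / 1.5
print(f"Implied HBM3 per-chip bandwidth: {implied_hbm3:.2f} TB/s")
```

The 10 TB/s "combined bandwidth" in NVIDIA's announcement is simply the two chips' per-chip bandwidth added together, not a faster individual memory interface.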

NVIDIA hasn't announced which supplier it will source the brand-new HBM3e memory dies from for its GH200 AI GPU, but SK Hynix was recently reported to have received a request from NVIDIA to sample its next-gen HBM3e DRAM. Samsung also has faster HBM3 dies said to offer up to 5 TB/s of bandwidth per stack, though SK Hynix looks to be the likely choice for the GH200.

NVIDIA today announced the next-generation NVIDIA GH200 Grace Hopper platform — based on a new Grace Hopper Superchip with the world’s first HBM3e processor — built for the era of accelerated computing and generative AI.

Created to handle the world’s most complex generative AI workloads, spanning large language models, recommender systems and vector databases, the new platform will be available in a wide range of configurations.

The dual configuration — which delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering — comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance and 282GB of the latest HBM3e memory technology.

HBM3e memory, which is 50% faster than current HBM3, delivers a total of 10TB/sec of combined bandwidth, allowing the new platform to run models 3.5x larger than the previous version, while improving performance with 3x faster memory bandwidth.

via NVIDIA

NVIDIA also stated that the first systems using Hopper GH200 GPUs with the new HBM3e memory will be available by Q2 2024, slightly later than AMD's Instinct MI300X GPUs, which will carry similar 5 TB/s+ bandwidth HBM3 dies with up to 192 GB of VRAM.

Written by Hassan Mujtaba
