NVIDIA GH200 Grace Hopper Superchip

The breakthrough design for giant-scale AI and HPC applications.

Higher Performance and Faster Memory—Massive Bandwidth for Compute Efficiency

The NVIDIA GH200 Grace Hopper Superchip is a breakthrough processor designed from the ground up for giant-scale AI and high-performance computing (HPC) applications. It delivers up to 10X higher performance for applications processing terabytes of data, enabling scientists and researchers to find solutions to the world's most complex problems.

Take a Closer Look at the Superchip


The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace and Hopper architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications. With a 900-gigabyte-per-second (GB/s) coherent interface, the superchip is 7X faster than PCIe Gen5. And with HBM3 and HBM3e GPU memory, it supercharges accelerated computing and generative AI. GH200 runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, the HPC SDK, and Omniverse.
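To make the coherent memory model concrete, the sketch below shows what NVLink-C2C hardware coherence enables on a GH200-class system: a GPU kernel can dereference a pointer returned by an ordinary host malloc(), with no explicit cudaMemcpy staging. This is a minimal illustration, not NVIDIA sample code; the kernel, array size, and error handling are simplified, and on platforms without coherent system-allocated memory access you would fall back to cudaMallocManaged().

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Simple kernel that increments each element in place.
// On GH200, the pointer can come straight from host malloc():
// NVLink-C2C keeps CPU and GPU accesses coherent, so no explicit
// host-to-device copy is required.
__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;
    // Plain CPU allocation, directly accessible to the GPU on
    // Grace Hopper (system-allocated memory). On non-coherent
    // platforms, use cudaMallocManaged() here instead.
    int *data = (int *)malloc(n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;

    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("data[42] = %d\n", data[42]);  // expect 43
    free(data);
    return 0;
}
```

Compiling with nvcc and running on GH200 hardware is assumed; the point of the sketch is that the same pointer is valid on both sides of the NVLink-C2C link.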



The NVIDIA GH200 NVL2 fully connects two GH200 Superchips with NVLink, delivering up to 288GB of high-bandwidth memory, 10 terabytes per second (TB/s) of memory bandwidth, and 1.2TB of fast memory. The GH200 NVL2 offers up to 3.5X more GPU memory capacity and 3X more bandwidth than the NVIDIA H100 Tensor Core GPU in a single server for compute- and memory-intensive workloads.

Explore Grace Hopper Reference Design for Modern Data Center Workloads


This reference design serves AI training, inference, 5G, and HPC workloads.

The design combines:
- NVIDIA GH200 Grace Hopper Superchip
- NVIDIA BlueField®-3
- OEM-defined input/output (IO) and fourth-generation