NVIDIA H200 Tensor Core GPU
Supercharging AI and HPC workloads.
Higher Performance With Larger, Faster Memory
The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance
computing (HPC) workloads with game-changing performance
and memory capabilities.
Based on the NVIDIA Hopper™ architecture, the NVIDIA H200 is the first GPU to
offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s),
nearly double the capacity of the NVIDIA H100 Tensor Core GPU and 1.4X more
memory bandwidth. The H200’s larger and faster memory accelerates
generative AI and large language models, while advancing scientific computing for
HPC workloads with better energy efficiency and lower total cost of ownership.
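As a sanity check on those ratios, a short Python snippet can compare the H200's published memory figures against the H100's. The H100 SXM values used below (80 GB of HBM3 at 3.35 TB/s) are assumptions drawn from public specs, not figures stated on this page:

```python
# Published memory specs; H100 SXM figures are assumed, not from this page.
h200_capacity_gb = 141       # HBM3e
h200_bandwidth_tbs = 4.8
h100_capacity_gb = 80        # HBM3 (assumed)
h100_bandwidth_tbs = 3.35    # (assumed)

capacity_ratio = h200_capacity_gb / h100_capacity_gb       # ~1.76x, "nearly double"
bandwidth_ratio = h200_bandwidth_tbs / h100_bandwidth_tbs  # ~1.43x, the quoted "1.4X"

print(f"Capacity:  {capacity_ratio:.2f}x")
print(f"Bandwidth: {bandwidth_ratio:.2f}x")
```

The arithmetic matches the claims above: roughly 1.76X the capacity and 1.43X the bandwidth.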
Unlock Insights With High-Performance LLM Inference
In the ever-evolving landscape of AI, businesses rely on large language models to
address a diverse range of inference needs. An AI inference accelerator must deliver the
highest throughput at the lowest TCO when deployed at scale for a massive user base.
The H200 doubles inference performance compared to H100 GPUs when handling
large language models such as Llama2 70B.
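A rough back-of-the-envelope estimate shows why the larger memory matters for a model of this size. The sketch below counts weights only at FP16 (2 bytes per parameter) and ignores the KV cache and activations; these assumptions are illustrative, not figures from this page:

```python
# Weight-memory estimate for a 70B-parameter model.
# Assumptions: FP16 (2 bytes/parameter); KV cache and activations ignored.
params = 70e9
bytes_per_param = 2
weight_gb = params * bytes_per_param / 1e9   # 140 GB of weights

h200_gb = 141
h100_gb = 80  # assumed H100 SXM capacity, not from this page

print(f"Llama2 70B FP16 weights: ~{weight_gb:.0f} GB")
print(f"Fraction of one H200: {weight_gb / h200_gb:.2f}")
print(f"H100s needed for weights alone: {weight_gb / h100_gb:.2f}")
```

Under these assumptions the FP16 weights alone nearly fill a single H200's 141 GB, whereas an 80 GB H100 would need model parallelism across two GPUs before serving a single request.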