
Nvidia unveils a new GPU architecture designed for AI data centers

While the rest of the computing industry struggles to reach one exaflop of conventional computing, Nvidia says it will blow past everyone with an 18-exaflop AI supercomputer powered by a new GPU architecture.

The H100 GPU has 80 billion transistors (the previous generation, Ampere, had 54 billion), nearly 5TB/s of external connectivity, and support for PCIe Gen 5 and High Bandwidth Memory 3 (HBM3), which enables 3TB/s of memory bandwidth, the company says. Due out in the third quarter, it is the first in a new family of GPUs named Hopper, after Rear Admiral Grace Hopper, the computing pioneer whose work led to COBOL and who popularized the term "computer bug."
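The quoted 3TB/s figure is consistent with straightforward HBM3 arithmetic: peak bandwidth is bus width times per-pin data rate, divided by eight to convert bits to bytes. A quick sketch follows; the 5120-bit bus width and ~4.8 Gbit/s per-pin rate are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope check of the quoted ~3 TB/s HBM3 memory bandwidth.
# Assumed values (not stated in the article):
BUS_WIDTH_BITS = 5120          # total HBM3 interface width across stacks
DATA_RATE_GBITS_PER_PIN = 4.8  # effective per-pin transfer rate

# bits/s -> bytes/s: multiply width by per-pin rate, divide by 8
peak_bw_gb_per_s = BUS_WIDTH_BITS * DATA_RATE_GBITS_PER_PIN / 8
print(f"Peak memory bandwidth: {peak_bw_gb_per_s:.0f} GB/s")  # ~3072 GB/s, i.e. about 3 TB/s
```

Under these assumptions the product works out to roughly 3,072 GB/s, matching the announced 3TB/s.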


Source:: Network World – Data Center