
AMD unveils new generation of Epyc, Instinct chips

AMD announced its latest AI and high-performance computing processors at its Advancing AI event in San Francisco, including the fifth generation of its Epyc server processors and AMD Instinct MI325X accelerators. Commitments from leading customers and partners, including Microsoft, OpenAI, Meta, Oracle, and Google Cloud, rounded out the event.

The fifth-generation Epyc CPUs come in two distinct configurations, both of which are part of the same 9005 family, also known as Turin. The scale-up version has a new “Zen 5” core architecture optimized for maximum performance, according to AMD. The scale-out models come with dense “Zen 5c” compact cores, a concept first introduced last year with the Zen 4c-based Bergamo line.

Intel has a similar strategy with its Performance cores and Efficiency cores, but it achieves the E-cores by removing instructions, which can risk breaking apps. AMD got its compact cores by limiting cache size and clock speed, but it kept all the instructions.
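The risk of dropping instructions can be illustrated with a runtime feature probe: software hard-coded to use an extension such as AVX-512 faults with an illegal-instruction error on a core that lacks it, so robust code checks before taking the fast path. A minimal sketch, assuming a Linux system where /proc/cpuinfo reports CPU feature flags (the function names here are illustrative, not from any library):

```python
# Sketch: probe for an ISA extension instead of assuming it exists.
# Assumes Linux, where /proc/cpuinfo lists supported CPU feature flags.

def cpu_flags() -> set:
    """Return the set of ISA feature flags reported by the first CPU."""
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    break
    except OSError:
        pass  # non-Linux platform: report nothing rather than crash
    return flags

def supports(feature: str) -> bool:
    return feature in cpu_flags()

# An app that unconditionally executed AVX-512 code would crash on a core
# that removed it; probing first allows a scalar fallback instead.
if supports("avx512f"):
    print("taking AVX-512 path")
else:
    print("taking scalar fallback path")
```

Because AMD's compact cores keep the full instruction set, the same binary follows the same code path on Zen 5 and Zen 5c, just at a lower clock and with less cache.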

The scale-up CPUs come with up to 128 cores and 256 threads and are made with TSMC’s 4nm process. The scale-out Zen 5c version will offer up to 192 cores and 384 threads and is made on TSMC’s 3nm process. The Zen 5-based processors have a max power draw of 500 watts, while the 5c draws 390 watts.

The new chips come with big claims. AMD says its flagship 192-core Epyc 9965 is 2.7 times faster than Intel’s competing Xeon Platinum 8592+. AMD also claims four times faster video transcoding, 3.9 times faster performance in HPC applications, and up to 1.6 times the performance per core in virtualized environments.

Instinct takes on Nvidia

AMD is fighting a war on two fronts: against Intel and against Nvidia. It’s looking to take on Nvidia in the AI accelerator space with Instinct. While AMD has only a fraction of Nvidia’s business, it does have some wins, such as powering Frontier, the world’s fastest supercomputer.

To that end, AMD has launched the MI325X with greater memory capacity and bandwidth than the Instinct MI300X, which launched last December. The MI325X is based on the same CDNA 3 GPU architecture but improves on the MI300X’s 192GB of HBM3 high-bandwidth memory and 5.3 TB/s of memory bandwidth.

AMD said the MI325X delivers 40% higher AI inference throughput with a Mixtral 8x7B model than Nvidia’s top-of-the-line Hopper H200, along with 30% lower latency with a 7-billion-parameter Mixtral model and 20% lower latency with a 70-billion-parameter Llama 3.1 model.

AMD is also planning an eight-GPU platform for next year, similar to Nvidia’s DGX Pods. With eight MI325X GPUs connected over AMD’s Infinity Fabric, the platform will offer 2TB of HBM3e memory, 48 TB/s of total memory bandwidth, 20.8 petaflops of FP8 performance, and 10.4 petaflops of FP16 performance, AMD said.
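As a quick arithmetic check, those aggregate figures divide evenly across the eight GPUs. A short sketch, assuming linear scaling across the platform (variable names are illustrative):

```python
# Sanity-check the quoted eight-GPU platform totals by dividing them
# back down to per-GPU figures (assumes resources scale linearly).

GPUS = 8
TOTAL_HBM_TB = 2.0        # 2TB of HBM3e across the platform
TOTAL_BW_TBS = 48.0       # 48 TB/s aggregate memory bandwidth
TOTAL_FP8_PFLOPS = 20.8
TOTAL_FP16_PFLOPS = 10.4

per_gpu_hbm_gb = TOTAL_HBM_TB * 1024 / GPUS   # 2048 GB / 8 = 256.0 GB
per_gpu_bw_tbs = TOTAL_BW_TBS / GPUS          # 48 / 8 = 6.0 TB/s
fp8_to_fp16_ratio = TOTAL_FP8_PFLOPS / TOTAL_FP16_PFLOPS  # 2.0

print(per_gpu_hbm_gb, per_gpu_bw_tbs, fp8_to_fp16_ratio)
```

The 2:1 ratio between FP8 and FP16 throughput is the expected doubling from halving the floating-point precision.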

The MI325X will begin shipping in systems from Dell Technologies, Lenovo, Supermicro, Hewlett Packard Enterprise, Gigabyte, and several other server vendors starting in the first quarter of next year, the company said.


Source: Network World
