Intel highlights new Xeons for AI at Hot Chips 2024

Intel showed off new processors and a new interconnect at the Hot Chips 2024 conference taking place at Stanford University. The new Xeon 6 SoC features a high-speed optical interconnect for rapid data movement, a key element of AI processing.

Details about the Xeon 6 line have already been announced, but Intel disclosed new information about this specific SoC, code-named Granite Rapids-D and scheduled to launch in the first half of 2025.

Granite Rapids-D is heavily optimized to scale from edge devices to edge nodes using a single-system architecture and integrated AI acceleration. The Xeon 6 SoC combines the compute chiplet from Intel Xeon 6 processors with an edge-optimized I/O chiplet, which enables the SoC to deliver significant improvements in performance, power efficiency, and transistor density compared to previous technologies.

Granite Rapids-D was built using telemetry from more than 90,000 edge deployments, according to Intel. It features up to 32 lanes of PCI Express 5.0, up to 16 lanes of Compute Express Link (CXL) 2.0, and 2x100G Ethernet.
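To put those I/O figures in rough perspective, the sketch below estimates per-direction bandwidth. It is a back-of-the-envelope calculation using standard PCIe 5.0 signaling assumptions (32 GT/s per lane, 128b/130b encoding, with CXL 2.0 riding the same physical layer), not Intel-published totals for this SoC.

```python
# Back-of-the-envelope bandwidth for the I/O figures quoted above.
# Assumes standard PCIe 5.0 signaling (32 GT/s per lane, 128b/130b encoding);
# CXL 2.0 uses the same PCIe 5.0 physical layer. Not Intel-published totals.

PCIE5_GTS_PER_LANE = 32            # giga-transfers per second, per lane
ENCODING_EFFICIENCY = 128 / 130    # 128b/130b line encoding

def pcie5_gbytes_per_sec(lanes: int) -> float:
    """Approximate usable bandwidth in GB/s, one direction."""
    return lanes * PCIE5_GTS_PER_LANE * ENCODING_EFFICIENCY / 8

print(f"PCIe 5.0 x32:    ~{pcie5_gbytes_per_sec(32):.0f} GB/s per direction")  # ~126 GB/s
print(f"CXL 2.0 x16:     ~{pcie5_gbytes_per_sec(16):.0f} GB/s per direction")  # ~63 GB/s
print(f"2x100G Ethernet: ~{2 * 100 / 8:.0f} GB/s per direction")               # ~25 GB/s
```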

Among the edge-specific enhancements are extended operating temperature ranges and industrial-class reliability. Other features include new media acceleration capabilities that enhance video transcode and analytics for live over-the-top (OTT), video-on-demand (VOD) and broadcast media, as well as Advanced Vector Extensions (AVX) and Advanced Matrix Extensions (AMX) for improved inferencing performance.
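For software that wants to exploit AMX on parts like this, a first step is simply confirming the kernel reports the relevant CPU features. The snippet below is a minimal sketch for Linux; the flag names (avx512f, amx_tile, amx_int8, amx_bf16) are standard Linux /proc/cpuinfo flags rather than anything specific to the Xeon 6 SoC, and frameworks such as oneDNN-backed PyTorch pick up AMX automatically when it is present.

```python
# Minimal sketch: check /proc/cpuinfo on Linux for AVX-512/AMX CPU flags.
# Flag names are the standard ones the Linux kernel exposes; this is only
# a capability check, not a measure of inferencing performance.

def cpu_flags() -> set[str]:
    """Return the set of CPU feature flags reported by the kernel."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx512f", "amx_tile", "amx_int8", "amx_bf16"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```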

High-speed optical interconnect is at the heart of the processor

Intel’s Integrated Photonics Solutions Group demonstrated its first fully integrated optical compute interconnect (OCI) chiplet co-packaged with an Intel CPU and running live data. In a demo at the show, the OCI chiplet supported 64 channels of 32-gigabit-per-second data transmission in each direction over up to 100 meters of fiber optics.
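Taken at face value, those per-channel figures imply a sizable aggregate link. The quick arithmetic below simply multiplies the demoed channel count by the per-channel rate; the totals are derived here, not Intel-quoted specs.

```python
# Implied aggregate bandwidth of the OCI demo figures quoted above:
# 64 channels x 32 Gbps per channel, in each direction.
channels = 64
gbps_per_channel = 32

per_direction_tbps = channels * gbps_per_channel / 1000
print(f"Per direction: {per_direction_tbps:.3f} Tbps")       # 2.048 Tbps
print(f"Bidirectional: {2 * per_direction_tbps:.3f} Tbps")   # 4.096 Tbps
```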

The technology is promising for future scaling of CPU and GPU cluster connectivity and for novel computing architectures, including coherent memory expansion and resource disaggregation, in emerging AI infrastructure for data centers and high-performance computing (HPC) applications.

Intel also showed off a client processor, code-named Lunar Lake, designed for the next generation of AI PCs. Lunar Lake combines Performance-cores (P-cores) and Efficient-cores (E-cores) with a new neural processing unit (NPU) that is up to 4x faster than the previous generation, enabling corresponding improvements in generative AI, according to Intel.

Lunar Lake also features new Xe2 graphics processing unit (GPU) cores that improve gaming and graphics performance by 1.5x over the previous generation.


Source: Network World