Broadcom expands AI networking portfolio with Jericho4 Ethernet fabric router

Broadcom has begun shipping its Jericho4 Ethernet fabric router, a platform designed to connect more than one million accelerators, or XPUs, across multiple data centers to support distributed AI workloads, the company said Tuesday.

The chipmaker said Jericho4 delivers higher bandwidth, enhanced security, and lossless performance to overcome scaling limits in AI infrastructure.

Broadcom added that Jericho4 complements its Tomahawk 6 and Tomahawk Ultra chips, offering what it called a complete networking portfolio for high-performance computing (HPC) and AI environments.

According to the company’s product documentation for Jericho4, the device supports low‑power, high‑bandwidth memory (HBM) packet buffering, providing up to 160 times more traffic buffering than on‑chip memory, which the company says enables zero‑packet‑loss performance in heavily congested networks.

“The Jericho4 family is engineered to extend AI-scale Ethernet fabrics beyond individual data centers, supporting congestion-free RoCE and 3.2 Tbps HyperPort for unprecedented interconnect efficiency,” Ram Velaga, senior vice president and general manager of Broadcom’s core switching group, said in a statement.

“Scale Up Ethernet (SUE), Tomahawk Ultra, Tomahawk 6, and Jericho4 all play a very important role in enabling large-scale distributed computing systems within a rack, across racks, and across data centers in an open and interoperable way,” Velaga added.

According to Broadcom, a single Jericho4 system can scale to 36,000 HyperPorts, each running at 3.2 Tbps, with deep buffering, line-rate MACsec encryption, and RoCE transport over distances greater than 100 kilometers.
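As a rough sanity check on the scaling claim, the figures Broadcom cites imply an aggregate fabric capacity in the petabit-per-second range. The sketch below multiplies the stated numbers (36,000 HyperPorts at 3.2 Tbps each); it is simple arithmetic on the published figures, not a measured result.

```python
# Back-of-envelope aggregate capacity for a maximally scaled Jericho4 fabric,
# using only the figures cited by Broadcom: 36,000 HyperPorts x 3.2 Tbps each.
HYPERPORTS = 36_000
PORT_TBPS = 3.2

aggregate_tbps = HYPERPORTS * PORT_TBPS
aggregate_pbps = aggregate_tbps / 1_000  # 1 Pbps = 1,000 Tbps

print(f"Aggregate fabric bandwidth: {aggregate_tbps:,.0f} Tbps "
      f"(~{aggregate_pbps:.1f} Pbps)")
```

That works out to roughly 115 petabits per second of raw port capacity across a fully built-out fabric.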

HBM powers distributed AI

Jericho4’s use of HBM improves on previous designs by significantly increasing total memory capacity and reducing the power consumed by the memory I/O interface, enabling faster data processing than traditional buffering methods, according to Lian Jie Su, chief analyst at Omdia.

While this may raise costs for data center interconnects, Su said higher-speed data processing and transfer can remove bottlenecks and improve AI workload distribution, increasing utilization of data centers across multiple locations.

“Jericho4 is very different from Jericho3,” Su said. “Jericho4 is designed for long-haul interconnect, while Jericho3 focuses on interconnect within the same data center. As enterprises and cloud service providers roll out more AI data centers across different locations, they need stable interconnects to distribute AI workloads in a highly flexible and reliable manner.”

Others pointed out that Jericho4, built on Taiwan Semiconductor Manufacturing Company’s (TSMC) 3‑nanometer process, increases transistor density to support more ports, integrated memory, and greater power efficiency, features that may be critical for handling large AI workloads.

“It enables unprecedented scalability, making it ideal for coordinating distributed AI processing across expansive GPU farms,” said Manish Rawat, semiconductor analyst at TechInsights. “Integrated HBM facilitates real-time, localized congestion management, removing the need for complex signaling across nodes during high-traffic AI operations. Enhanced on-chip encryption ensures secure inter-data center traffic without compromising performance.”

Rawat said these advances position Jericho4 as more than a high‑performance switch and make it a potential cornerstone for AI network fabrics, including hyper‑mesh topologies and GPU superclusters.

Broadcom’s data center influence

Analysts suggest that Jericho4 could reshape AI‑focused data center designs, particularly for hyperscalers such as Amazon Web Services, Microsoft Azure, and Google Cloud.

Its ability to support long‑haul interconnects of more than 100 km allows compute and storage resources to be separated across city‑scale distances, creating more flexible and resilient regional AI zones.
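The 100 km figure also sets a physical floor on latency between sites. A minimal sketch, assuming light propagates through fiber at roughly two-thirds the speed of light in vacuum (~2×10⁸ m/s, a common rule of thumb), shows the unavoidable one-way propagation delay; real-world latency adds switching and queuing delay on top.

```python
# Rough one-way propagation delay over 100 km of fiber.
# FIBER_SPEED_M_S is an assumed effective speed of light in glass (~2/3 c);
# it is a rule-of-thumb value, not a Broadcom-published figure.
DISTANCE_M = 100_000          # 100 km, per the article's interconnect claim
FIBER_SPEED_M_S = 2.0e8       # assumed ~2/3 of c in vacuum

one_way_ms = DISTANCE_M / FIBER_SPEED_M_S * 1_000
print(f"One-way fiber propagation over 100 km: ~{one_way_ms:.2f} ms")
```

At about half a millisecond each way, propagation delay alone is small enough that city-scale separation of compute and storage remains practical for many AI workloads.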

“The inclusion of HBM marks a shift toward ‘smart switches’, integrating compute, memory buffering, and inline analytics directly into the network fabric,” Rawat said. “Jericho4 also advances security by supporting native encrypted switching, paving the way for zero-trust networking at the hardware level. With congestion-resistant, high-throughput interconnects, it simplifies AI scaling by reducing reliance on complex overlay networks or proprietary fabrics.”

Su pointed out that Broadcom already has significant influence in the data center interconnect space. “The combination of Jericho4 with its other chips, including Tomahawk Ultra and Tomahawk 6, will allow enterprises and hyperscalers to invest in more hyperscale data centers across a wider range of geographical locations,” Su said.

“They need not concentrate AI workloads in a single data center; instead, they can rely on a network of high‑performance data centers to support AI training and inference workloads,” Su added.

Source: Network World