
Arista financials offer glimpse of AI network development

Arista Networks is expecting a potential revenue spike of $1.5 billion from AI-based networking growth through 2025. During the vendor’s third-quarter financial call, Arista CEO Jayshree Ullal said she is “pleasantly surprised with the faster acceleration of AI pilots in 2024.”

Arista posted revenue of $1.811 billion for the quarter ending Sept. 30, an increase of 7.1% compared to the second quarter of 2024, and an increase of 20% from the third quarter of 2023. “We definitely see that our large cloud customers are continuing to refresh on the cloud, but are pivoting very aggressively to AI,” Ullal said.

Arista has between 10 and 15 classic enterprise accounts trialing AI networks, but those trials involve very few GPUs, Ullal said. That’s in contrast to five hyperscale trials, which Arista expects will grow to 100,000 GPUs and more.

“We are progressing very well in four out of the five pilots. Three of the customers are moving from trials to pilots this year, and we’re expecting those three to become 50,000 to 200,000 GPU clusters in 2025,” Ullal said. Of the other two pilots, one is just getting started and the other is in what Ullal called a “steady state” that Arista hopes to advance in 2025.

Ullal also noted that with a few exceptions, most of the trials currently use 400G Ethernet technology to network GPUs together. “We’re in some early trials on 800G, but the majority is 400G for now, as the 800G ecosystem is really not here yet. I expect as we go into 2025, we will see a better split between 400G and 800G,” Ullal said.

Ullal’s statements align with recent research from the Dell’Oro Group.

AI cluster networking speeds are expected to grow from 200/400/800 Gbps today to over 1 Tbps in the near future, according to Sameh Boujelbene, vice president for Ethernet switch market research at Dell’Oro Group.

Dell’Oro forecasts that by 2025, the majority of ports in AI networks will be 800 Gbps, and by 2027, the majority of ports will be 1600 Gbps, showing a very fast adoption of the highest speeds available in the market. “This pace of migration is almost twice as fast as what we usually see in the traditional front-end network that is used to connect general-purpose servers,” Boujelbene stated in a recent report.

Arista believes it has a strong, three-pronged approach to grow networking speeds as needed and take advantage of the current growth in AI communications capabilities. Three key products – the Arista 7700 R4 Distributed Etherlink Switch, the 7800 R4 Spine switch, and the 7060X6 Leaf – are all in production and support 800G as well as 400G optical links.

Facebook’s parent company, Meta Platforms, helped develop the 7700 and recently said it would be deploying the Etherlink switch in its Disaggregated Scheduled Fabric (DSF), which features a multi-tier network that supports around 100,000 DPUs, according to reports. The 7700R4 AI Distributed Etherlink Switch (DES) supports the largest AI clusters, offering massively parallel distributed scheduling and congestion-free traffic spraying based on the Jericho3-AI architecture.

The 7060X6 AI Leaf switch features Broadcom Tomahawk 5 silicon with a capacity of 51.2 Tbps and support for 64 800G or 128 400G Ethernet ports. The 7800R4 AI Spine utilizes Broadcom Jericho3-AI processors with an AI-optimized packet pipeline and supports up to 460 Tbps in a single chassis, which corresponds to 576 800G or 1152 400G Ethernet ports.
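As a quick sanity check on those figures, the port counts and aggregate capacities are consistent with each other: the sketch below (illustrative only; the `capacity_tbps` helper is not an Arista tool) multiplies ports by per-port speed.

```python
def capacity_tbps(ports: int, port_speed_gbps: int) -> float:
    """Total switching capacity in Tbps for a given port configuration."""
    return ports * port_speed_gbps / 1000

# 7060X6 AI Leaf: 51.2 Tbps total capacity
assert capacity_tbps(64, 800) == 51.2   # 64 x 800G
assert capacity_tbps(128, 400) == 51.2  # 128 x 400G

# 7800R4 AI Spine: roughly 460 Tbps per chassis
print(capacity_tbps(576, 800))   # 576 x 800G -> 460.8 Tbps
print(capacity_tbps(1152, 400))  # 1152 x 400G -> 460.8 Tbps
```

Note that both port configurations on each platform resolve to the same aggregate bandwidth, which is why the vendor can quote a single chassis capacity for mixed 400G/800G deployments.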

“This broad range of Ethernet platforms allows our customers to optimize density and minimize tiers to best match the requirements of their AI work,” said John McCool, Arista senior vice president and chief platform officer, during the financial call. 

“Between the 7060 and the 7800, we do see people that are optimizing a mix of both of those products in the same deployment so they can get the minimum number of tiers but have the maximum amount of GPUs that fit their use case. So we do see a lot of tailoring now around the size of the deployments based on how many GPUs they want to deploy in their data center,” McCool said.

“As our customers continue with AI deployments, they’re also preparing their front-end networks. New AI clusters require new high-speed connections into the existing backbone. These new clusters also increase bandwidth demands on the backbone to access training data, capture snapshots and deliver results generated by the cluster,” McCool said.

“Next-generation data centers integrating AI will contend with significant increases in power consumption while looking to double network performance. Our tightly coupled electrical and mechanical design flow allows us to make system-level design trade-offs across domains to optimize our solutions,” McCool said. 

“Finally, our development operating software with SDK integration, device diagnostics and data analysis supports a fast time to design and production with a focus on first-time results. These attributes give us confidence that we will continue to execute on our roadmap in this rapidly evolving AI networking segment,” McCool said.

The AI networking arena is expected to keep evolving as the Ultra Ethernet Consortium, which is developing advanced Ethernet capabilities to handle anticipated AI network workloads, prepares to release its 1.0 specification before the end of the year.

The UEC recently said it added 40 new vendors to the group, which now includes 97 members. 

“Critical to the rapid adoption of AI networking is the Ultra Ethernet Consortium specification expected imminently with Arista’s key contributions as a founding member,” Ullal said. “In our view, Ethernet is the only long-term viable direction for open-standard-based AI networking.”

Source: Network World
