
A new player in the networking industry is tackling the unique challenges of AI infrastructure at hyperscale.
NextHop AI, founded by former Arista Networks Chief Operating Officer Anshul Sadana, announced this week that it has secured $110 million in funding to develop highly customized networking solutions specifically for the world’s largest cloud providers. Sadana spent 17 years at Arista and, before that, eight years at Cisco, so he’s no stranger to the networking world. While he remains bullish on Arista and its prospects, he sees needs that, in his view, existing networking vendors are not serving.
“The AI wave is right in front of us, and it is causing a massive disruption, including at the infrastructure level,” Sadana told Network World. “So I felt there is a gap in the market.”
The AI infrastructure challenge for networking at hyperscale
The demands of AI are changing the way data centers and particularly hyperscale data centers need to operate.
The challenge for hyperscale cloud providers is twofold: They need increasingly customized networking solutions for AI, but the pace of innovation in the AI space is making it difficult for them to build everything in-house. Sadana noted that existing networking vendors aren’t building the super customized hardware that hyperscalers need either.
The key to success for hyperscalers is being highly optimized at every level of the stack. Among the core issues are signal integrity, power efficiency per port, latency and thermal management.
Sadana argued that unlike traditional networking where an IT person can just plug a cable into a port and it works, AI networking requires intricate, custom solutions. The core challenge is creating highly optimized, efficient networking infrastructure that can support massive AI compute clusters with minimal inefficiencies.
How NextHop is looking to change the game for hyperscale networking
NextHop AI is working directly alongside its hyperscaler customers to develop and build customized networking solutions. “We are here to build the most efficient AI networking solutions that are out there,” Sadana said.
More specifically, Sadana said that NextHop is looking to help hyperscalers in several ways including:
- Compressing product development cycles: “Companies that are doing things on their own can compress their product development cycle by six to 12 months when they partner with us,” he said.
- Exploring multiple technological alternatives: Sadana noted that hyperscalers building on their own can often explore only one or two alternative approaches. With NextHop, he said, they can explore four to six.
- Achieving incremental efficiency gains: At the massive scale hyperscalers operate at, even an incremental one percent improvement can have an outsized impact.
“You have to make AI clusters as efficient as possible for the world to use all the AI applications at the right cost structure, at the right economics, for this to be successful,” Sadana said. “So we are participating by making that infrastructure layer a lot more efficient for cloud customers, or the hyperscalers, which, in turn, of course, gives the benefits to all of these software companies trying to run AI applications in these cloud companies.”
Technical innovations: Beyond traditional networking
In terms of what the company is actually building now, NextHop is developing specialized network switches that go beyond traditional data center networking equipment. The company’s solutions support speeds of 1.6 terabits per second per port, with dense switches offering 50-100+ terabits per second of throughput—performance levels that were once only found in massive telecom core routers.
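The cited figures imply the switch radix. As a back-of-envelope sketch (using only the throughput numbers from the article, not any actual NextHop product specs), the port count a given aggregate capacity implies at 1.6 Tb/s per port can be computed as:

```python
import math

def ports_for_throughput(total_tbps: float, per_port_tbps: float = 1.6) -> int:
    """Minimum number of ports needed to reach a target aggregate throughput."""
    return math.ceil(total_tbps / per_port_tbps)

# The article's 50-100+ Tb/s range works out to roughly 32-63 ports per switch
# at 1.6 Tb/s each.
low = ports_for_throughput(50)    # 32 ports
high = ports_for_throughput(100)  # 63 ports
```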
Sadana also emphasized that NextHop isn’t simply delivering standalone network devices. The networking switch components are increasingly delivered in different ways, integrating with server and rack deployments. “We are no longer just a separate pizza box; it’s an integrated solution that gets to the customer the way they want it,” he said.
NextHop is also working at the software layer. While open-source networking stacks such as SONiC exist, hyperscalers require an extreme level of customization. NextHop is a member of the Linux Foundation and works with SONiC.
“Today, what’s happening is the cloud companies have their own operating system,” Sadana said. “It might be open source, but they have lots of libraries that are actually closed within their own company.”
While open source network operating systems are foundational, NextHop isn’t creating a universal solution, but instead collaboratively developing custom software stacks tailored to each hyperscaler’s specific requirements.
Looking ahead: NextHop’s roadmap
Currently employing around 100 people, NextHop is still in its early stages. But the company sees a clear path forward, focusing exclusively on the needs of the largest cloud providers.
“The term ‘next hop’ means the next router you send packets to, and we want to be that next router that people send packets to,” Sadana said.
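For readers unfamiliar with the routing term behind the company name: a router chooses a packet’s “next hop” by finding the longest routing-table prefix that matches the destination address. A minimal illustrative sketch (the table entries below are made-up example addresses, not anything from NextHop):

```python
import ipaddress

# Toy routing table: prefix -> next-hop router address (example values only)
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "10.255.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "10.1.255.1",
    ipaddress.ip_network("0.0.0.0/0"): "192.0.2.1",  # default route
}

def next_hop(dst: str) -> str:
    """Return the next-hop address for the longest prefix matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]
```

A destination of 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16, so the more specific /16 wins and the packet is forwarded to 10.1.255.1.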
NextHop AI at a glance
- Founded: 2024
- Funding: $110 million
- Investors: Lightspeed Venture Partners, Kleiner Perkins, WestBridge Capital, Battery Ventures, and Emergent Ventures
- Headquarters: Santa Clara, Calif.
- CEO: Anshul Sadana
- What they do: Customized networking hardware and software for hyperscalers to meet the needs of AI networking.
Source: Network World