
Network infrastructure has long been described as the plumbing of the internet, with Cisco chief among the providers. Now as the AI era unfolds and the plumbing needs modernizing, Cisco is vying to secure its role as a key AI infrastructure provider across enterprise and hyperscaler networking environments.
Over the past few months, Cisco has made multiple moves – including refreshing its core switching and routing portfolios, adding new software capabilities, and contributing important AI technologies to the open-source community – to build out its infrastructure and management capabilities for AI networking now and in the future.
“It’s a good business being in the plumbing industry – or, to go to the AI analogy of providing the picks and shovels of the AI infrastructure – agnostic of whatever happens, and just allow ourselves to be part of all these solutions as they develop. [It] is absolutely a place we want to be,” Kevin Wollenweber, senior vice president and general manager of data center, internet, and cloud infrastructure at Cisco, told Network World.
Cisco’s AI product blitz
Cisco’s AI strategy became a lot clearer to enterprise customers in June when it added two new programmable Silicon One-based Smart Switches: the C9350 Fixed Access Smart Switches and the C9610 Modular Core. Both are built for AI workloads, such as agentic AI, generative AI, automation, and AR/VR.
Silicon One switch ASICs can run parallel operations in dedicated “slices” without interruption by other tasks, supporting more applications and allowing more complex applications to be run on the switch, according to Cisco. The idea is to ensure the network core can scale to meet the data requirements of AI traffic. In addition, the switches include support for post-quantum cryptography (PQC) to ensure campus-wide data confidentiality, even as encrypted data traverses increasingly complex, AI-driven topologies.
Another new entrant is the N9300 Smart Switch series, which is also built on the 4.8T-capacity Silicon One chip. This portfolio includes built-in programmable data processing units (DPUs) from AMD to offload complex data processing work and free up the switches for AI and large workload processing.
And in June, the vendor rolled out a family of five new routers, including the small branch office 8100 and the high-end data center 8500. The idea is to tie together Cisco’s SD-WAN and secure access service edge (SASE) offerings.
In addition, the company unveiled its AI POD offering with the Nvidia RTX 6000 Pro and Cisco UCS C845A M8 server package.
Cisco AI PODs are preconfigured, validated, and optimized infrastructure packages that customers can plug into their data center or edge environments as needed. The PODs include Nvidia AI Enterprise, which features pretrained models and development tools for production-ready AI, and are managed through Cisco Intersight. Intended as a blueprint for building reliable, scalable, and secure network infrastructures, the PODs are easily configurable so customers can build them for targeted workloads and expand their AI deployments as needed, Cisco said.
Another offering is the Cisco Secure AI Factory with Nvidia, which brings together Cisco security and networking technology, Nvidia DPUs, and storage options from Pure Storage, Hitachi Vantara, NetApp, and VAST Data. The Secure AI Factory integrates Cisco’s Hypershield and AI Defense packages to help protect the development, deployment, and use of AI models and applications.
Enterprises aim for modernization, AI experimentation
Taken together, the new and refreshed networking gear will help enterprises move and manage AI network traffic more effectively. And there are other benefits as well.
“For me, one of the important takeaways for enterprises is the refresh of the entire portfolio and the capabilities added to the Nexus dashboard,” Wollenweber said.
Cisco recently began shipping an integrated Nexus Dashboard, version 4.1, that will let customers see and manage Application Centric Infrastructure and Nexus NX-OS VXLAN EVPN fabrics with unified data, control, and policy enforcement. The dashboard will allow customers to consolidate and control AI, LAN, SAN, and Cisco IP Fabric for Media (IPFM) systems from a single pane of glass. A new AI Assistant in Nexus Dashboard is designed to help customers spot and fix problems quickly with natural language commands and intelligent remediation recommendations.
“Even if you are an enterprise customer that’s not quite ready to start to deploy on-prem AI infrastructure, there’s a massive amount of modernization you can do within the network itself,” Wollenweber said.
For example, as customers look to upgrade their server infrastructure, they can potentially consolidate the number of servers needed, or lower the number of cores, as next-generation server cores become more powerful. That can save on per-core licensing costs, Wollenweber noted. “There’s a huge amount of consolidation they can do in the data center,” he said.
Modernization efforts will ultimately cut customer costs, save on power, drive up efficiency, and simplify operations so that customers can free up space to potentially add some of this AI infrastructure over time, Wollenweber said. “Then, too, we offer the AI PODs that will let them start to run a few of these applications and start to experiment with AI when they are ready,” Wollenweber said.
Cisco is beginning to see the emergence of sovereign AI companies that are building out capacity that can be leveraged by enterprises that don’t have resources to build their own AI systems, he added.
“Some enterprises are looking at using capacity in some of these sovereign AI data centers, such as the ones we recently invested in – HUMAIN in Saudi Arabia, or the partnership with G42, or the investment we made in BlackRock AIP,” Wollenweber said.
“Overall, AI demand in the enterprise will grow over time. But enterprise customers need to see the value, see the ROI. Also, they have to have a well-defined use case,” Wollenweber said, noting that the 12-month innovation cycles of GPU vendors can be problematic if customers choose the wrong platform. If customers “make a large investment and can’t capitalize on that investment and get value out of the equipment, then by the time they get to maturity, the next generations of GPUs are there and they want to move,” he said.
That’s another reason customers are experimenting with AI via cloud providers or sovereign AI facilities that are built for experimentation and for expanding AI use cases, Wollenweber said.
AI community building
While Cisco is focused on bolstering enterprise infrastructure, it’s also doing work with the broader AI community by making available technology it hopes will drive further usage.
Most recently, Cisco donated its AGNTCY initiative to the Linux Foundation, which will continue to advance the AI agent management platform as an open-source project. Outshift, which is Cisco’s research and development arm, launched AGNTCY to develop AI agent discovery, identity, messaging, and observability infrastructure.
Under the auspices of the Linux Foundation, Cisco, Dell, Google Cloud, Oracle, Red Hat and more than 65 other vendors will build a wide range of industry-standard protocols and frameworks to allow AI agents to discover one another, communicate, collaborate and be managed across platforms, models, and organizations, according to the Linux Foundation and Cisco.
“But to build these collaborative systems, agents need to be able to find each other, verify their identities, and share context without expensive custom integration work. Agentic AI is now at the same inflection point the early internet faced. Brilliant individual systems that can’t talk to each other, where every agent is its own island – until common protocols emerge,” wrote Vijoy Pandey, general manager and senior vice president of Outshift by Cisco, in a blog post about the contribution.
Agentic AI fragmentation has been accelerating with every vendor platform building its own discovery systems, identity frameworks, and messaging protocols, according to Pandey. “The missing piece isn’t smarter agents — it’s complete infrastructure that lets any agent work with any other agent, regardless of who built it or where it runs,” Pandey wrote.
In addition, the Foundation AI team at Cisco recently teamed with AI model hub Hugging Face to bolster malware protection and strengthen security across the AI ecosystem.
“As part of this expanded collaboration, Cisco Foundation AI will provide the platform and scanning of every public file uploaded to Hugging Face — AI model files and other files alike — in a unified malware scanning capability powered by custom-fit detection capabilities in an updated ClamAV engine,” wrote Cisco’s Hyrum Anderson and Alie Fordyce in a blog post about the collaboration.
Cisco launched Foundation AI in April. It’s a team within Cisco Security, created after the acquisition of Robust Intelligence and focused on developing open-source models and tools for securing the AI supply chain. ClamAV is an open-source malware detection scanner from Cisco Talos that targets malware, trojans, viruses, and other malicious threats aimed at email gateways, file servers, and web servers.
“We’re building large fabrics for data communication, building the tools that enable the applications that exist today, the agents that are being built, as well as the management and security capabilities for those applications that are going to exist tomorrow and in the future,” Wollenweber said.
Cisco’s AI competition
Of course, Cisco isn’t the only vendor looking to be the AI king.
Arista, for example, is running with an AI networking strategy of its own. And it’s paying dividends, literally: Arista this week reported quarterly revenue of $2.2 billion, surpassing the company’s plan by $100 million.
“Our stated goal of $750 million back-end AI networking is well on track … we do expect an aggregate AI networking revenue to be ahead of the $1.5 billion in 2025 and growing in many years to come,” Arista CEO Jayshree Ullal said during the vendor’s financial presentation. And in addition to its AI wins among the cloud titans, Arista has 25-30 neocloud and enterprise customers, Ullal said.
HPE, too, will be targeting AI networking, now that it officially closed its $14 billion acquisition of Juniper Networks in July.
The opportunity of networking for AI is exciting, said Rami Rahim, former Juniper CEO and head of a new HPE Networking business unit, during a press conference about the close of the deal.
“Here we’re building the large-scale data centers, the factories for AI that are incredibly, incredibly important in today’s environment,” Rahim said in early July. “Together, we’ll be able to innovate faster across all layers of the technology stack, from silicon to systems to software, and we’re going to have a complete solution.”
Nvidia, Extreme Networks and others will also play important AI infrastructure roles.
Source: Network World