AI workloads to transform enterprise networks

As companies put more AI into production, the technology’s explosive growth both at the edge and in the data center is creating demands for bandwidth, latency, and architectural flexibility that traditional networks weren’t designed to deliver.

The latest data from Omdia finds that all AI traffic — including net new AI applications and AI-enhanced applications — accounted for 39 exabytes of total network traffic in 2024. Non-AI traffic of AI-enhanced applications totaled 131 exabytes, and conventional application traffic accounted for 308 exabytes, according to Omdia research director Brian Washburn.

Omdia expects those 39 exabytes of AI traffic to roughly double to 79 exabytes in 2025, and AI traffic to keep growing at a rate that far outpaces conventional traffic. By 2031, Washburn predicts, AI traffic will overtake conventional traffic.
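
Those figures imply a steep compound growth rate. As a back-of-the-envelope sketch (assuming, purely for illustration, that AI traffic keeps growing about 45% a year after its 2025 doubling while conventional traffic grows about 10% a year; these rates are illustrative, not Omdia’s model), the crossover Washburn describes lands right around 2031:

    # Back-of-the-envelope projection of the Omdia figures cited above.
    # The 2024 baselines (39 EB AI, 308 EB conventional) come from the article;
    # the post-2025 growth rates are assumptions for illustration only.
    ai_eb, conventional_eb = 39.0, 308.0  # 2024 totals, in exabytes

    for year in range(2025, 2032):
        ai_eb *= 2.0 if year == 2025 else 1.45   # doubles in 2025, then ~45%/yr (assumed)
        conventional_eb *= 1.10                  # ~10%/yr conventional growth (assumed)
        print(f"{year}: AI {ai_eb:6.0f} EB vs conventional {conventional_eb:6.0f} EB")

    # With these assumed rates, AI traffic passes conventional traffic in 2031.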

Net new AI traffic includes use cases such as apps driven by visual processing, surveillance, new games and media, and AI content generation. AI-enhanced traffic includes smart transcription services and content summaries, code assistance and review, intelligent analytics, natural language queries, and content filters. And those figures don’t include fully private networks such as hyperscaler operations, on-prem traffic, and campuses.

For enterprise traffic as a whole, a recent source of data is Zscaler, which released a report in March showing an unprecedented 3,464% increase in enterprise AI activity over the course of one year. According to the report, in the last 11 months of 2024, 3,624 terabytes of data was transferred to and from more than 800 AI applications such as ChatGPT.

One company that’s already seeing an impact is Salesforce, which has added both generative AI and agentic AI capabilities to its cloud-based CRM platform.

“We’re seeing a significant increase in data processing and transfer, particularly as we handle larger datasets for model training and real-time inference,” says Paul Constantinides, the company’s executive vice president of engineering. This translates to a demand for higher bandwidth, lower latency, and more robust network infrastructure.

All this AI activity will force enterprises to rethink their data center networking, cloud networking, edge networking, and network security.

AI networking in the data center

In data centers, AI poses two distinct networking challenges. The first comes from model training, which generates heavy east-west traffic between individual GPUs and servers.

“The demand for massive amounts of resources — particularly CPU and GPU — is driving a new zone within the enterprise data center dedicated to AI,” says Lori MacVittie, distinguished engineer and chief evangelist in the Office of the CTO at Seattle-based F5 Networks. “These AI factories have special networking needs related to traffic steering that require smarter networking, new security capabilities, and the ability to handle higher volumes of data.”
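
To get a feel for the volumes involved, consider the gradient synchronization that happens on every training step. In a ring all-reduce, each GPU sends roughly 2 × (N−1)/N times the gradient volume per step; the model and cluster sizes below are assumptions chosen for illustration, not any vendor’s benchmark:

    # Illustrative arithmetic for why training traffic is so heavy: in a ring
    # all-reduce, each GPU sends about 2*(N-1)/N times the gradient volume
    # on every training step. Sizes here are assumed for the example.
    params = 70e9          # assumed 70B-parameter model
    bytes_per_grad = 2     # fp16 gradients
    n_gpus = 8             # assumed GPUs in the ring

    grad_bytes = params * bytes_per_grad
    per_gpu_per_step = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    print(f"~{per_gpu_per_step / 1e9:.0f} GB sent per GPU per training step")

    # ~245 GB per GPU per step: why AI factories need dedicated high-speed fabrics.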

That’s going to translate to a lot of new spending.

“AI, and particularly generative AI, is a major growth driver in data center Ethernet switching,” says Brandon Butler, senior research manager for enterprise networks at IDC, in a March report. That’s causing a renaissance in the data center portion of the Ethernet switch market, he says. The research firm is forecasting growth in the generative AI data center Ethernet switch market from $640 million in 2023 to more than $9 billion in 2028.

In addition, enterprises are beginning to experiment with agentic AI, in which individual AI-powered agents collaborate to carry out complex tasks, write code, or execute entire business workflows. Agentic AI often runs on-prem or in private clouds to reduce costs and latency — and to keep all corporate data safe and secure.

Agentic AI traffic flows are expected to be dramatically different from the predictable, deterministic traffic created by traditional applications, though it’s not clear yet what exactly those differences will be.

“How all these connections are going to flow across the network is unknown — and almost can’t be known if the agentic AI can orchestrate based on what needs to be done,” says F5’s MacVittie. “You can’t predict it.”

AI in the cloud

Once models are put into production, however, the second challenge emerges: traffic flows outside the data center, between the models and the end users.

“Inference requires strong wide area and multi-site connectivity, which is different than training’s network topology, which requires dense local networks,” says Jason Carolan, chief innovation officer at Charlotte, NC-based Flexential.

And flexibility is key, he adds. “Since many AI workloads are in proof-of-concept or experimentation mode for some time, network connections, topology, and capacity needs may change based on new models, new data, or new end-points,” Carolan says.

Some enterprises are already prepared to handle AI traffic, says Derek Ashmore, application transformation principal at Asperitas Consulting. That’s because they’ve already begun to move away from inflexible, hard-to-maintain legacy networks, he says. The move to modern cloud networking has been going on for a while, he adds, and was kicked into overdrive during the COVID pandemic. “Even without COVID, that move would have happened, it just would have happened at a slower rate,” Ashmore says.

That’s a good thing, since it sets up enterprises for the challenges coming with generative AI.

Multi-modal AI applications, for example, process text, images, audio, and video — and queries and responses can be very large. Google’s latest Gemini 2.5 model has a context window of a million tokens, with two million coming soon.

Two million tokens is around 1.5 million words. For reference, all the Harry Potter books put together contain around one million words. Big context windows allow for longer, more complicated conversations — or for AI coding assistants to examine larger portions of a code base. And because the AI’s answers are dynamically generated, responses usually can’t be cached.
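
The word counts above follow from a common rule of thumb: one token is roughly three-quarters of an English word. A quick sanity check (the 0.75 ratio is an approximation, not a property of any particular tokenizer):

    # Rough token-to-word conversion behind the figures above.
    WORDS_PER_TOKEN = 0.75  # assumed average for English prose

    for tokens in (1_000_000, 2_000_000):
        print(f"{tokens:,} tokens ≈ {int(tokens * WORDS_PER_TOKEN):,} words")

    # 1,000,000 tokens ≈ 750,000 words; 2,000,000 tokens ≈ 1,500,000 words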

As AI companies leapfrog each other in terms of capabilities, they will be able to handle even larger conversations — and agentic AI may increase the bandwidth requirements exponentially and in unpredictable ways.

Any website or app could become an AI app, simply by adding an AI-powered chatbot to it, says F5’s MacVittie. When that happens, a well-defined, structured traffic pattern will suddenly start looking very different. “When you put the conversational interfaces in front, that changes how that flow actually happens,” she says.

Another AI-related challenge that networking managers will need to address is that of multi-cloud complexity.

“We have a dispersion in terms of different hyperscale clouds, private clouds, and specialty clouds that just do special things,” says Zac Smith, former Equinix executive and community member at New York-based Sustainable & Scalable Infrastructure Association.

For example, there are companies like CoreWeave that offer cloud-based GPUs. There are database companies and data lakes. There are AI platforms provided by hyperscalers, and there’s AI running on-prem, in colos, and in private clouds.

“These are all new environments, and people now have to solve connectivity issues among very different types of clouds,” says Smith.

Smith recently researched the different networking paradigms of Amazon and Google. “There are a lot of similarities,” he says. “You can connect to other third parties in the same regions, peer, do fabric — but they all have different ways of doing it and it’s not all normalized.”

AI at the edge

Finally, there’s edge AI, which poses its own set of networking challenges. Latency becomes critical, especially for mission-critical applications like self-driving cars, factory robots, and medical devices.

Other enterprise use cases for AI workloads include AI-powered security controls for video surveillance cameras and quality control in manufacturing environments, says Flexential’s Carolan. Or a retail beauty store might offer a platform that lets customers try products virtually, he says.

Edge AI requires processing capabilities closer to data sources to reduce latency and bandwidth usage, says Salesforce’s Constantinides. Low-latency edge networks, like CDNs, can help, he adds.

AI and network security

AI brings a whole host of potential security problems to enterprises. The technology is new and unproven, and attackers are quickly developing techniques that target AI systems and their components.

That’s on top of all the traditional attack vectors, says Rich Campagna, senior vice president of product management at Palo Alto Networks. “At the edge, devices and networks are often distributed, which leads to visibility blind spots,” he adds. That makes it harder to fix problems if something goes wrong.

Palo Alto is developing its own AI applications, Campagna says, and has been for years. And so are its customers. “For example, I recently met with a retailer who is rearchitecting its store networks to support AI-powered inventory management at the edge,” he says.

Networks need to adapt, he says. “Ensure that, regardless of where the asset is deployed, there are protection mechanisms in place, as close as possible to that asset.”

And all the security challenges are magnified with agentic AI.

It’s a problem that F5’s MacVittie is already seeing. For example, when a company operates on zero trust and least privilege, how does it handle agent identities, credentials, and privileges? “All the traditional tools we use to enforce roles and credentials don’t work suddenly because they don’t have roles or credentials,” MacVittie says. “Or we give them root access — and that makes security folks twitch.”
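
One pattern being explored for this problem is to replace standing roles with short-lived, narrowly scoped credentials minted per task. The sketch below is a minimal illustration of that idea; the function names and token format are hypothetical, not any vendor’s actual API:

    # Minimal sketch of least-privilege credentials for AI agents:
    # short-lived, narrowly scoped tokens instead of standing roles or root.
    # Names and token format here are hypothetical, for illustration only.
    import base64, hashlib, hmac, json, time

    SIGNING_KEY = b"demo-key-rotate-me"  # stand-in for a managed secret

    def issue_agent_token(agent_id: str, scopes: list[str], ttl_s: int = 300) -> str:
        """Mint a signed, short-lived token an agent presents for a single task."""
        claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_s}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def authorize(token: str, required_scope: str) -> bool:
        """Verify signature and expiry, then check the scope before the agent acts."""
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        claims = json.loads(base64.urlsafe_b64decode(body))
        return time.time() < claims["exp"] and required_scope in claims["scopes"]

    token = issue_agent_token("inventory-agent-7", scopes=["read:stock"])
    print(authorize(token, "read:stock"))    # True: within the agent's narrow grant
    print(authorize(token, "write:orders"))  # False: outside its scope, no root needed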

As AI proliferates across internal networks, the need for fine-grained security becomes critical, says Sanjay Kalra, product leader at Zscaler.

But there’s another network security aspect to AI — the possibility that employees might upload sensitive data to public AI platforms or apps.

According to Kalra, Zscaler’s enterprise customers blocked 60% of all AI transactions. Some companies cut off all access to public AI apps, while others look for indications that an employee is sharing financial data, personally identifiable information, medical data, or source code. Zscaler blocked 2.9 million attempts to upload this kind of data to ChatGPT alone. The most common DLP violation? Social Security numbers.
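
The detection side of that enforcement is classic DLP pattern matching applied to outbound AI traffic. Here is a minimal sketch of the idea (real products layer many detectors, validation, and context on top of this; the function name is hypothetical):

    # Sketch of the kind of pattern matching a DLP tool applies to outbound
    # AI traffic. Real engines add validation and context; this only flags
    # the classic Social Security number format.
    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def flag_upload(payload: str) -> bool:
        """Return True if an outbound prompt or file looks like it contains an SSN."""
        return bool(SSN_PATTERN.search(payload))

    print(flag_upload("Summarize this record: employee SSN 123-45-6789"))  # True
    print(flag_upload("Summarize this quarterly report"))                  # False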

Finally, there’s one more type of unwanted AI traffic haunting enterprise networks: hackers. According to Bugcrowd’s annual hacker survey, released in October, 86% of hackers say that AI has fundamentally changed their approach to hacking.

Now, this was a survey of “white hat” hackers — the good guys. The bad guys don’t take surveys, but they, too, use AI. According to an October report by Keeper Security, 51% of IT and security leaders say that AI-powered attacks are the most serious threat facing their organizations.

Attackers are using AI to create better spam and more of it, to guess passwords, for reconnaissance, and more. Luckily, network security managers are getting their own AI, with all the top security vendors investing heavily in this area.

Source: Network World