
Enterprises have told me from the start that cloud-hosted generative AI based on large language models isn't going to transform their business operations. They were hopeful when the concept of AI agents came along, because it seemed to align with their own thinking about AI. Now that the AI agent story has morphed into "agentic AI," it seems to have taken on the same big-cloud-AI flavor they already rejected. What do enterprises want from AI agents, why is "agentic" thinking wrong, and where is this all headed?
Enterprises tend to think of AI as part of their application software toolbox, something IBM realized early and has leveraged successfully. Of the 322 comments from people involved in IT development, 302 say that business processes are implemented as a series of tasks, each assigned to something: a person, an external partner, or a software component. When these people hear "AI agent," they think about assigning one or more of those tasks to an AI element, an "agent." That lets them fit AI into existing business processes and software applications without a forklift upgrade.
You can learn a lot about enterprise thinking on AI agents by listing what enterprises say the agents are and are not. What they are is software components that fit into business processes like any other application, hosted either in the cloud or locally. What they are not is a chatbot or copilot, because that model is a personal productivity aid, not an "agent." They are not specific to a technology (AI or machine learning, large or small language models). They are not necessarily autonomous, though enterprises do tend to see them working independently within their missions, just as an application component would.
If AI and AI agents are application components, then they fit into both business processes and workflows. A business process is a flow, and these days at least part of that flow is the set of data exchanges among applications or their components, which is what we typically call a "workflow." It's common to treat the threading of workflows through applications and workers as something separate from the applications themselves. Remember the "enterprise service bus"? That's still what most enterprises prefer for business processes that involve AI: get an AI agent that does something, give it the output of some prior step, and let it create output for the step beyond it. Whether an AI agent is then "autonomous" is really decided by whether its output goes to a human for review or is simply accepted and implemented.
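To make that concrete, here is a minimal sketch in Python of the pattern just described: the AI agent is one more step in a workflow, and a human-review gate decides how "autonomous" it really is. The names (AgentStep, human_review, the restocking example) and the approval logic are illustrative assumptions, not any particular product's API.

```python
# Sketch only: an AI agent as one step in a workflow, with "autonomy" decided
# by whether its output is routed through a human reviewer before the next step.

from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class StepResult:
    payload: Any        # output handed to the next step in the workflow
    needs_review: bool  # True if a human must approve before it is used


def ai_agent_step(prior_output: dict) -> StepResult:
    """Hypothetical AI agent step: analyzes the prior step's output and
    recommends an action, but does not implement it on its own."""
    recommendation = {"action": "restock", "sku": prior_output["sku"]}
    return StepResult(payload=recommendation, needs_review=True)


def human_review(result: StepResult) -> Any:
    """Stand-in for a review queue; a real workflow would pause here until an
    approver accepts or rejects the agent's recommendation."""
    print(f"Awaiting approval for: {result.payload}")
    return result.payload  # assume approved, for the sake of the sketch


def run_workflow(steps: list[Callable[[dict], StepResult]], initial: dict) -> Any:
    data = initial
    for step in steps:
        result = step(data)
        # Autonomy is a routing decision: send to a reviewer, or pass straight on.
        data = human_review(result) if result.needs_review else result.payload
    return data


if __name__ == "__main__":
    run_workflow([ai_agent_step], {"sku": "A-1001", "on_hand": 3})
```

The point of the sketch is that autonomy lives in the workflow routing, not in the agent itself; the same agent becomes "autonomous" the moment its output is accepted without review.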
All of this contrasts with what we hear about "agentic" AI, with autonomous operation leading the list. Of the 407 total enterprises, 294 said they would likely want AI agents to "recommend" or "analyze" but not implement without approval, while only 34 said they'd want the agents to act on their own. The remainder saw AI agents as simply doing something in much the way software does, such as creating a list of problems ranked by frequency of occurrence.
All this makes sense if we assume that any business tool is most valuable where it does something good without disrupting everything else. What enterprises like about their vision of an AI agent is that it lets them introduce AI into a business process without having AI take over the process or requiring that the process be reshaped to accommodate AI. Tech adoption has long favored strategies that limit the scope of impact, to control both cost and the level of disruption the technology creates. This favors integrating AI with current applications, which is why enterprises have always seen AI improvements to their overall business operations as linked to incorporating AI into business analytics.
The cloud-provider technical people I know don't like this approach; they see it as likely to raise barriers to the use of their online generative AI services. Enterprises see their AI agent vision as facilitating cloud AI services instead. If there's one massive AI entity doing everything, then data sovereignty almost surely means it has to run in house. If AI use is decomposed into small steps, some of those steps surely won't involve disclosing confidential company data, which means some AI agents could be delivered as a service.
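A toy illustration of that decomposition argument, again in Python: once AI use is split into discrete steps, hosting can be decided per step based on whether the step touches confidential data. The step names and the single-rule sensitivity test are made up for the example.

```python
# Toy example: per-step hosting decisions once AI use is decomposed.
# Step names and the sensitivity rule are assumptions for illustration.

STEPS = [
    {"name": "summarize_public_filings", "handles_confidential_data": False},
    {"name": "score_customer_churn",     "handles_confidential_data": True},
    {"name": "draft_marketing_copy",     "handles_confidential_data": False},
]


def hosting_for(step: dict) -> str:
    # Data sovereignty rule: anything touching confidential data stays in house;
    # everything else is a candidate for an as-a-service AI agent.
    return "in-house" if step["handles_confidential_data"] else "cloud-as-a-service"


for step in STEPS:
    print(f'{step["name"]}: {hosting_for(step)}')
```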
There may be several deep truths to be learned here, one being that news about technology is dominated by what sellers want to sell rather than what enterprises want to buy. A more interesting one is that we may be seeing a very important fork in the road for AI.
The AI models that everyone sees have one key feature—people can use them. No, I’m not being trivial. I mean people as opposed to companies. Anyone can go to an online AI service and interact with it. This has made AI almost populist in one sense; it’s a technology as available as the Internet. But enterprises aren’t people and aren’t thinking of AI in the same way. If you wonder why enterprise views of AI don’t match the popular view, just think of what makes up the popular view—people.
If we want AI to change our lives, and not just entertain us, we need AI to be less populist and more company-ist, so to speak. IBM, I think, grasped this early on, which is why so many enterprises name IBM as their most insightful partner in AI. There is an enormous opportunity for AI-as-a-software-component, for what enterprises think of as AI agents, but another of those deep truths intrudes: How does AI participate in an application workflow? We don't know, and if we don't address that question, AI could hit a wall when we try to develop truly useful agents. Applications have interfaces, APIs. AI agents will need them too if they're to integrate with applications and current workflows, and they'll need to do that to achieve what enterprises expect, even demand.
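As a hedged illustration of that last point, the sketch below gives an AI agent the same kind of contract any other workflow component would have. The interface, class names, and the invoice-coding example are hypothetical, not a standard or any vendor's API.

```python
# Sketch only: an AI agent exposed behind the same API shape as any other
# workflow component, so a workflow engine can call it without special handling.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class TaskRequest:
    task_id: str
    inputs: dict


@dataclass
class TaskResponse:
    task_id: str
    outputs: dict
    confidence: float  # lets the caller decide whether to route to a human


class WorkflowComponent(Protocol):
    """The contract is identical whether the component is conventional code or AI."""
    def handle(self, request: TaskRequest) -> TaskResponse: ...


class InvoiceCodingAgent:
    """Hypothetical AI-backed component; the caller neither knows nor cares
    that a model sits behind handle()."""
    def handle(self, request: TaskRequest) -> TaskResponse:
        # Placeholder for a model call; a real agent would classify the invoice here.
        return TaskResponse(request.task_id, {"gl_code": "6420"}, confidence=0.87)


def dispatch(component: WorkflowComponent, request: TaskRequest) -> TaskResponse:
    return component.handle(request)


if __name__ == "__main__":
    print(dispatch(InvoiceCodingAgent(), TaskRequest("t-1", {"invoice_total": 125.0})))
```

To the workflow engine calling handle(), the AI agent is indistinguishable from conventional code, which is exactly the property enterprises seem to be asking for.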
Source: Network World