
Cloudera jockeys for AI platform prominence

While the big hyperscalers are fighting it out for dominance in the AI space, many enterprises are looking for alternatives.

More than half (60%) of large companies have pulled some AI workloads from public clouds, according to a survey released this summer by Flexential. Companies cited sustainability goals, cybersecurity and data privacy, control, better performance, and cost efficiencies as their top reasons for doing so.

Still, enterprise data remains spread across multiple environments, according to the survey: 59% of respondents use public clouds to store the data they need for AI training and inference, 60% use colocation providers, and 49% use on-premises infrastructure.

In general, hybrid is the default position for large enterprises. Only 10% of companies use a single public cloud provider, according to Flexera’s latest survey data. The vast majority (73%) use a hybrid approach with both public and private clouds, and 14% use multiple public clouds.

That means relying on a single hyperscaler’s AI stack can limit enterprise IT options when it comes to deploying AI applications. Not only does the data used to train or provide context for the AI reside in various locations, but so does the computing power.

So there’s a race on right now over which company will become the default platform for generative AI: the operating system for the next evolution of the enterprise, the middleware that ties it all together.

This week, Cloudera is making its bid for that role.

Cloudera’s hybrid platform for data, analytics, and AI

Cloudera is primarily a data company, helping enterprises organize their data no matter where it resides. It has expanded into analytics and, most recently, into generative AI. Its flagship product is the Cloudera Data Platform, which offers data warehouse, data engineering, data flow, and machine learning functionality.

In June, the company acquired Verta’s Operational AI platform, which helps companies turn their data into AI-ready custom RAG applications. RAG, or retrieval-augmented generation, is a way to funnel up-to-date information to an AI model without retraining it; it makes generative AI tools more accurate and more current, and can give them secure access to private information. Verta also brought a generative AI workbench, model catalog, and governance tools.
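In code, the RAG pattern is straightforward. The sketch below is a minimal illustration of the technique, not Verta’s or Cloudera’s actual API; the `vector_store` and `llm` objects are hypothetical stand-ins for an embedding index over private documents and a language model client.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# All names here are hypothetical, not Verta's or Cloudera's API.

def answer_with_rag(question: str, vector_store, llm) -> str:
    # 1. Retrieve: find the stored documents most similar to the question.
    #    The vector store holds embeddings of private, up-to-date data.
    docs = vector_store.similarity_search(question, k=3)

    # 2. Augment: build a prompt that grounds the model in those documents.
    context = "\n\n".join(d.text for d in docs)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate: the model answers from the supplied context,
    #    so no retraining or fine-tuning is needed.
    return llm.generate(prompt)
```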

At Evolve24, its annual data and AI conference this week, Cloudera announced the Cloudera AI Inference service, which leverages Nvidia’s NIM microservices.

Nvidia NIMs are containerized microservices that package up open-source generative AI models, such as Llama and Mistral, together with the APIs and optimized inference engines needed to run them most efficiently on Nvidia GPUs. Integrating them into the Cloudera platform means that enterprises will be able to manage the full lifecycle of their generative AI applications, including data, training, and inference.

“Think of this as a framework to build a wide variety of AI solutions from,” says Forrester analyst Alvin Nguyen.

Any enterprises creating their own AI applications could be potential customers, he adds. “This helps accelerate development by consolidating key capabilities needed by all enterprise organizations into a single package,” Nguyen says. “You do need to know how to work with microservices, but this is something that any enterprise serious about AI should be capable of.”

Cloudera lets enterprises choose where to store their data, train their models, and run inference, whether in the cloud or on-premises, and manage everything from a single pane of glass.

“It’s the industry’s first inference service that allows you to build an AI application and deploy it anywhere,” says Abhas Ricky, Cloudera’s chief strategy officer. “You can use that to accelerate the deployment of foundation models on any cloud or data center with the highest levels of security and governance,” Ricky says.

And that includes all the auditing, access, and compliance functionality necessary to keep enterprise data safe, he adds.

“We have more data than anyone in the world – 25 exabytes of data under management,” he says. “If you’re a large bank or manufacturer, you want to run the models on your specific data or context.” That includes RAG, fine tuning, training, and AI agents, he says.

The partnership with Nvidia means that enterprises can get preconfigured containers of optimized generative AI models, exposed through industry-standard APIs, for simplified deployment and management.
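NIM containers expose an OpenAI-compatible HTTP API, so calling a locally deployed model can look like the sketch below. The endpoint URL, API key handling, and model identifier are placeholders that will vary by deployment; this is a generic illustration, not Cloudera-specific code.

```python
# Calling a locally deployed NIM container through its OpenAI-compatible
# endpoint. The base_url, api_key, and model name are placeholders; check
# the documentation for the specific NIM you deploy.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # the NIM container's endpoint
    api_key="not-used",                   # local deployments may not require a key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model identifier
    messages=[{"role": "user", "content": "Summarize our Q3 churn data."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```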

The product has been in preview since June and is publicly available now, Ricky says. “We already have a massive amount of customers, including a tier-one bank, a large manufacturer, and a large oil and gas company.” However, Cloudera could not disclose customer names at this time.

For enterprises using proprietary AI from OpenAI, Anthropic, or other providers rather than open-source models, Cloudera has connectors to those services as well, Ricky says. “We have customers using Nvidia and customers using OpenAI and Azure.”

Enterprises can even create model gardens, with a selection of vetted AIs that their developers can use.

“In an organization, I could have 10 or 15 different use cases for AI,” says Sanjeev Mohan, principal at the consultancy SanjMo and former Gartner research vice president for data and analytics. “One could be translating into French. There, Google’s LLM for translation has an advantage. Another use case could be developer productivity. If I want to write code, that would be a different model. Another model could be for customer service. Another use case could be to convert COBOL to Java. So, I want a model garden so I can pick and choose the model for my use case.”
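At its simplest, a model garden can be thought of as a vetted lookup table that routes each use case to an approved model. The sketch below is purely illustrative, with hypothetical model identifiers loosely matched to the use cases Mohan describes; it is not the interface of any Cloudera feature.

```python
# Illustrative sketch of a "model garden": a vetted registry that maps
# each use case to an approved model. The entries are hypothetical
# examples, not a real Cloudera API.
MODEL_GARDEN = {
    "translation":      "google/translation-llm",       # e.g., French translation
    "code_generation":  "meta/llama-3.1-70b-instruct",   # developer productivity
    "customer_service": "mistralai/mistral-7b-instruct",
    "cobol_to_java":    "ibm/granite-code",              # legacy modernization
}

def pick_model(use_case: str) -> str:
    """Return the vetted model for a use case, or raise if none is approved."""
    try:
        return MODEL_GARDEN[use_case]
    except KeyError:
        raise ValueError(f"No vetted model for use case: {use_case!r}")

print(pick_model("translation"))  # -> google/translation-llm
```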

With the added support for Nvidia NIMs, some of those models will now perform dramatically better. For example, downloading Llama 3.2 and running the base model on Nvidia GPUs is a common deployment option, but using the optimized Nvidia NIM version can cut the cost in half, Mohan says.

Nvidia is a good choice of hardware partner for generative AI, says Ari Lightman, professor at Carnegie Mellon University. “Right now, they’re the dominant player and they have everyone’s attention,” he says.

“Instead of piling on cloud hyperscalers, enterprises now have options,” says Andy Thurai, vice president and principal analyst at Constellation Research. “Especially the customers in regulated industries are always skeptical and will never fully move to cloud. Instead of piecemealing their own AI stack to build and infer models, private AI offerings such as this can help ease the pain a little.”

Enterprises in regulated industries will benefit, he says, especially those that have kept their data in-house instead of moving it to public clouds.

It’s also a good option for companies in jurisdictions that restrict data movement, Thurai adds.

“It’s a general-purpose platform for enterprises that want to keep data on premises, or have performance or privacy concerns,” says Constellation Research analyst Holger Mueller.

The Nvidia integration isn’t the only major partnership Cloudera announced this week. It now also offers an integration with Snowflake, allowing for easy access to data and analytics between the two big data platforms.


Source: Network World
