
HPE and Nvidia unveil private cloud for AI

Hewlett Packard Enterprise has introduced a portfolio of AI products and services jointly developed with Nvidia that it hopes will help enterprises realize the productivity benefits of generative AI easily and quickly — ideally by using its products on-premises or in hybrid clouds.

“Generative AI has tremendous potential for transforming businesses,” said HPE CEO Antonio Neri in a speech at HPE’s Discover event in Las Vegas, “but the complexity of fragmented AI technology poses too many risks and obstacles that can make it difficult for organizations to adopt at scale and put an organization’s most valuable asset — its proprietary data — at risk.”

To remedy this, HPE and Nvidia have, as a first step, jointly developed a turnkey private cloud for AI, Neri explained. This allows companies to focus their resources on developing new AI use cases.

Turnkey private cloud for AI

HPE Private Cloud AI integrates Nvidia GPUs, networking, and software with HPE's AI storage, AI compute, and GreenLake cloud to give enterprises an energy-efficient, fast, and flexible way to sustainably develop and deploy generative AI applications, the companies said. For example, Nvidia AI Enterprise software accelerates data science pipelines and streamlines the development and deployment of production-ready copilots and other GenAI applications. At the same time, the Nvidia Inference Microservices (NIM) included in Nvidia AI Enterprise are intended to provide optimized AI model inference, enabling a smooth transition from prototype to secure deployment of AI models.
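NIM microservices expose a standard, OpenAI-compatible HTTP API, which is part of what makes that prototype-to-production path straightforward for application teams. As a minimal sketch, assuming a NIM container is already running behind a hypothetical internal endpoint (the URL and model name below are illustrative placeholders, not details from the HPE announcement), an application could query it like this:

```python
# Minimal sketch: querying a deployed NIM service through its
# OpenAI-compatible API. The base_url, api_key handling, and model name
# are illustrative assumptions, not part of the HPE/Nvidia announcement.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim.internal.example:8000/v1",  # hypothetical in-cluster NIM address
    api_key="not-needed-locally",                    # a local NIM endpoint may not require a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # example model; availability depends on the deployment
    messages=[{"role": "user", "content": "Summarize last quarter's support tickets."}],
    max_tokens=200,
)

print(response.choices[0].message.content)
```

Because the interface mirrors the widely used OpenAI API, code written against a prototype endpoint can, in principle, be pointed at a production NIM deployment by changing only the base URL and credentials.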

As a complement, HPE's AI Essentials software offers preconfigured AI and data foundation tools. These provide customizable solutions, ongoing enterprise support, and reliable AI services such as data and model compliance, along with extensible capabilities delivered through a unified control plane. According to HPE, this is meant to ensure that AI pipelines remain compliant, explainable, and reproducible throughout the AI lifecycle. In addition, HPE Private Cloud AI is supported by the new OpsRamp AI copilot, which helps IT operations improve workload and IT efficiency.

Fidelma Russo, CTO and head of HPE’s Hybrid Cloud business unit, highlighted how quickly the solution can be up and running: “You plug in the device, connect it to the GreenLake cloud, and three clicks later, boom, your data science and IT operations teams are ready to go and start using the Nvidia software,” she said. “We also offer an evergreen experience, so you always have the latest version of HPE software, Nvidia AI Enterprise software, and NIM software just a click away.”

Four configurations available

To support a wide range of AI workloads and use cases, Private Cloud AI is offered in four configurations. Each has a modular design and can be expanded with additional capacity over time. At the same time, HPE’s GreenLake cloud is designed to provide a consistent, cloud-managed experience across all of them.

“You can start with a few small model inference stacks and then scale to multiple use cases with higher throughputs,” Russo explained. “And you can do retrieval-augmented generation (RAG) or LLM fine-tuning within one solution.”
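In practice, a RAG flow pairs an embedding model for retrieval with an LLM for generation, and both could be served as microservices inside such a private cloud. The sketch below, assuming two hypothetical internal OpenAI-compatible endpoints and example model names (none of these specifics come from the announcement), shows the basic pattern:

```python
# Minimal RAG sketch against two assumed NIM-style endpoints: an embedding
# service for retrieval and a chat service for generation. Endpoint URLs and
# model names are illustrative placeholders.
import numpy as np
from openai import OpenAI

embed = OpenAI(base_url="http://embed-nim.internal.example:8000/v1", api_key="unused")
chat = OpenAI(base_url="http://llm-nim.internal.example:8000/v1", api_key="unused")

documents = [
    "Q3 support tickets rose 12%, driven mainly by login failures after the SSO migration.",
    "The warehouse team reduced picking errors by 8% after the new scanner rollout.",
]

def embed_texts(texts):
    # Call the (assumed) OpenAI-compatible embeddings endpoint and return a matrix.
    result = embed.embeddings.create(model="nvidia/nv-embedqa-e5-v5", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed_texts(documents)

question = "What drove the increase in support tickets?"
q_vector = embed_texts([question])[0]

# Retrieve the most relevant document by cosine similarity.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = documents[int(np.argmax(scores))]

# Generate an answer grounded in the retrieved context.
answer = chat.chat.completions.create(
    model="meta/llama3-8b-instruct",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```

Scaling such a workload from a small pilot to higher-throughput use cases is largely a matter of adding capacity behind the same endpoints, which is the modularity Russo describes.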

As for GPUs, HPE offers the option to start with an L40S-based system and scale up to a GH200-based one. “We are committed, together with our partner Nvidia, to introducing the latest models that are suitable for these use cases,” said the HPE CTO.

HPE expects to make HPE Private Cloud AI generally available in the fourth quarter.

Source: Network World
