Dell rolls out private cloud package, bolsters AI offerings for on-prem development

Dell has announced expansions to its AI product portfolio with new partners, infrastructure, software and services updates.

Dell Technologies held its Dell Technologies World 2025 conference in Las Vegas and has announced new partnerships and expanded offerings for its AI Factory initiative.

The company introduced AI Factories last year, with Nvidia CEO Jensen Huang joining Dell CEO Michael Dell on stage for the keynote speech.

Dell remains firmly committed to the on-premises model and pointed to a report by Enterprise Strategy Group that found significant savings in the cost of inferencing large models on premises compared to public cloud AI-as-a-Service.

“We are definitely seeing more and more organizations try to evaluate the total cost of ownership to help determine where to run their AI use cases,” said Sam Grocott, senior vice president of products at Dell, on a conference call briefing with journalists.

“The data that we’re seeing … is that it is by far more cost-effective to run on prem than in the public cloud, and that actually spans all different sizes. Whether it’s 5,000 users or 10,000 or up to 50,000 users, we actually see up to 75% more cost-effective TCO when running on prem with the Dell AI Factory than the public cloud,” he said.

In keeping with the on-premises message, Dell introduced Dell Private Cloud with Dell Automation Platform. Private Cloud automates and simplifies the deployment of cloud OS stacks from the likes of Broadcom, Nutanix and Red Hat on disaggregated Dell infrastructure such as Dell PowerStore and Dell PowerEdge.

Private Cloud is delivered using Dell Automation Platform, foundational software and services designed to simplify how customers deploy and operate disaggregated solutions with secure, zero touch onboarding and centralized management.

Dell claims automation helps customers provision a private cloud stack in 90% fewer steps than manual processes, delivering a cluster in just two-and-a-half hours with no manual effort.

Another part of Private Cloud is Dell NativeEdge for virtualized workloads at the edge and in remote branch offices. Critical data is protected and secured with policy-based load balancing, VM snapshots and backup and migration capabilities. Organizations can manage diverse edge environments consistently with non-Dell and legacy infrastructure support.

Dell also introduced two new servers — the PowerEdge XE9785 and XE9785L — featuring AMD’s Instinct MI350 accelerators. Dell said the new platforms deliver up to 35 times better inferencing performance compared to previous systems while also reducing cooling demands through both liquid and air-cooled options.

On the subject of cooling, Dell introduced the PowerCool Enclosed Rear Door Heat Exchanger, which operates at slightly higher water temperatures, reducing the need for traditional chillers. Dell said the heat exchanger can cut cooling-related energy costs by up to 60% and allow up to 16% greater rack density without increased power consumption.

Dell also announced a new generation of PowerEdge servers using Nvidia’s Blackwell Ultra GPUs. These servers scale up to 256 GPUs per rack in liquid-cooled configurations and are designed for large language model training.

Another new high-density server, the PowerEdge XE9712, features the Nvidia GB300 NVL72 rack-scale design, which is optimized for training and offers a 50-fold gain in inference output and a five-fold increase in throughput.

Finally, there are the XE7740 and XE7745, which will support the Nvidia RTX Pro 6000 Server Edition and the Nvidia Enterprise AI Factory validated design. They will be used for physical and agentic AI use cases such as robotics, digital twins and multi-modal AI applications, with support for up to eight GPUs in a 4U chassis.

Turning to storage, Dell announced updates to its ObjectScale object storage platform to support more compact configurations and integrate with Nvidia BlueField-3 and Spectrum-4 networking. Through S3 over Remote Direct Memory Access (RDMA) support, Dell claims a 230% improvement in throughput performance, 80% lower latency, and 98% less CPU overhead.

On the networking side, Dell has introduced the PowerSwitch SN5600 and SN2201 Ethernet switches, part of the Nvidia Spectrum-X Ethernet networking platform, along with Nvidia Quantum-X800 InfiniBand switches. The switches deliver up to 800 Gb/s of throughput.

On the software side, Dell AI Factory is now validated to use Nvidia NIM, NeMo microservices and Blueprints tools for retrieval-augmented generation and agentic workflows.

Dell has welcomed Intel to its AI Factory with the launch of Dell AI Platform with Intel, a fully validated end-to-end package that uses Gaudi 3 AI accelerators and is designed for tasks such as large language models and computer vision.

So far there is only one model in the lineup, the PowerEdge XE9680, a purpose-built server for AI workloads. Its key features include up to eight Gaudi 3 accelerators with 128GB of HBM memory each and 3.7TB/s of bandwidth, 5th Gen Intel Xeon processors with up to 64 cores and PCIe Gen 5 slots, 32 DIMM slots, a 16-drive capacity and six integrated OSFP 800GbE Ethernet ports, preventing storage and connectivity bottlenecks.

Complementing the server is the Dell PowerSwitch Z9864F-ON, designed for end-to-end Ethernet AI fabrics, with 64 ports of 800GbE and a switching capacity of 51.2Tb/s. The PowerSwitch Z-series features Dell’s Enterprise SONiC Distribution 4.5, an open-source-based network OS with AI-optimized features for dynamic load balancing and advanced congestion control.

Source: Network World