
Nvidia announced the RTX Pro 6000 Blackwell Server Edition GPU, now offered in a slimmer design than previously available and aimed at mid-sized, on-premises data centers. The new design was unveiled at the SIGGRAPH conference taking place this week in Vancouver, Canada.
What’s notable about the launch is that the RTX Pro 6000 servers come in a 2U rack-mount design, marking the first time RTX GPUs have been offered in that form factor. Previously, they were only available in 4U-8U designs.
Nvidia partners including Cisco, Dell, HPE, Lenovo and Supermicro will offer x86-based servers featuring two RTX Pro 6000 Blackwell GPUs in a variety of configurations. Dell, for example, announced the 2U PowerEdge R7725, which pairs two RTX Pro 6000 GPUs with Nvidia’s AI Data Platform.
These new 2U servers, along with the rest of the RTX Pro server line, are designed to provide all the functionality needed for AI acceleration with Nvidia hardware. The servers also come with Nvidia’s BlueField-3 DPUs and ConnectX-8 SuperNICs fully integrated.
The servers support much more than AI, however. They are designed to bring GPU acceleration to traditional CPU-based workloads, such as data analytics, simulation, video processing and graphics rendering.
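To give a sense of what moving a traditional CPU workload onto the GPU looks like in practice, here is a minimal data-analytics sketch using Nvidia’s RAPIDS cuDF library, which mirrors the pandas API. It assumes a CUDA-capable GPU with cuDF installed; the file and column names are placeholders.

```python
# Minimal sketch: GPU-accelerated data analytics with RAPIDS cuDF.
# Assumes a CUDA-capable GPU and the cudf package; file/column names are placeholders.
import cudf

# Load a CSV directly into GPU memory (API mirrors pandas.read_csv).
df = cudf.read_csv("transactions.csv")

# Typical analytics: filter, group and aggregate, all executed on the GPU.
summary = (
    df[df["amount"] > 0]
    .groupby("region")
    .agg({"amount": "sum", "customer_id": "nunique"})
    .sort_values("amount", ascending=False)
)

# Bring the small result back to a pandas DataFrame on the CPU for reporting.
print(summary.to_pandas().head())
```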
Nvidia claims the new RTX Pro servers will offer up to 45x better performance and 18x higher energy efficiency compared to CPU-only 2U systems, which is the whole point of GPU acceleration. It is pitching the RTX Pro servers as a consolidation play against on-premises CPU-only AI efforts.
More Nvidia news from SIGGRAPH
The RTX Pro server wasn’t the only news at the show. Nvidia also announced two new models in its Nemotron family – Nemotron Nano 2 and Llama Nemotron Super 1.5 – with advanced reasoning capabilities for building smarter AI agents.
Nemotron is a family of enterprise-ready, open large language models (LLMs) designed to enhance agentic AI for tasks requiring sophisticated reasoning, instruction following, coding, tool use, and multimodal (text plus vision) understanding.
These models deliver high accuracy for their relative size in areas such as scientific reasoning, coding, tool use, instruction following and chat, according to Nvidia. They are designed to imbue AI agents with deeper cognitive abilities and help AI systems explore options, weigh decisions and deliver results within defined constraints.
Nvidia claims Nemotron Nano 2 achieves up to six times higher token-generation throughput than other models of its size, while Llama Nemotron Super 1.5 offers top-tier performance and leads in reasoning accuracy, making it suited to complex enterprise tasks.
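For a sense of how developers typically call these models, here is a minimal sketch using the OpenAI-compatible endpoint Nvidia exposes for its hosted models. The endpoint URL is the one Nvidia’s API catalog uses, but the model identifier for Llama Nemotron Super 1.5 is an assumption and should be checked against build.nvidia.com.

```python
# Minimal sketch: querying a hosted Nemotron model through Nvidia's
# OpenAI-compatible API. The model ID below is an assumption; check the
# catalog at build.nvidia.com for the exact identifier.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # Nvidia's hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],
)

response = client.chat.completions.create(
    model="nvidia/llama-nemotron-super-1.5",  # hypothetical model ID
    messages=[
        {"role": "system", "content": "You are a planning agent. Reason step by step."},
        {"role": "user", "content": "Outline a rollout plan for migrating 200 VMs to new hosts."},
    ],
    temperature=0.2,
    max_tokens=512,
)

print(response.choices[0].message.content)
```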
Nvidia is also giving robots and machines the ability to “see” and react to what they see, with new AI models that can ingest visual information and reason about it.
The vendor just announced Cosmos Reason, a new open, customizable 7-billion-parameter reasoning vision language model (VLM). VLMs allow robots and vision agents to reason about what they see, much as a human does. Until now, robots could “see,” but their ability to react to what they saw was extremely limited. A VLM gives a robot the ability to reason about its actions.
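As a rough illustration of what a VLM call looks like, the sketch below sends an image plus a question using the same OpenAI-compatible multimodal chat format. The model identifier for Cosmos Reason is an assumption, and the image path and prompt are placeholders.

```python
# Minimal sketch: asking a vision language model to reason about an image.
# Uses the OpenAI-compatible multimodal chat format; the model ID is an
# assumption and the image path is a placeholder.
import base64
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

# Encode a camera frame as a base64 data URL so it can ride along in the request.
with open("factory_floor.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="nvidia/cosmos-reason-7b",  # hypothetical model ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "A pallet is blocking the robot's path. What should it do next?"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=256,
)

print(response.choices[0].message.content)
```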
Nvidia also announced two new professional GPUs, the RTX Pro 4000 Small Form Factor (SFF) and the RTX Pro 2000.
Source: Network World