
Nvidia announced two new professional GPUs, the RTX Pro 4000 Small Form Factor (SFF) and the RTX Pro 2000, on Monday. Built on its Blackwell architecture, the new GPUs are aimed at delivering high-performance AI and graphics capabilities in compact desktop and workstation deployments.
The RTX Pro 4000 Blackwell SFF will be equipped with 24 GB of GDDR7 ECC memory, and will deliver 770 TOPS of AI performance, 73 TFLOPS of ray-tracing, and 24 TFLOPS of single-precision compute, the company said.
The chips will feature fifth-generation Tensor Cores and fourth-generation ray-tracing cores. With a 70-watt TDP, the new GPU will offer up to 2.5× the AI performance of its predecessor, bringing AI, graphics, and compute performance to compact systems, the company said.
The RTX Pro 2000 Blackwell Edition will be equipped with 16 GB of GDDR7 ECC memory with 288 GB/s memory bandwidth, and will deliver 17 TFLOPS single-precision performance. It will also include advanced video capabilities with ninth-generation NVENC engines and support four mini-DisplayPort 2.1b outputs in a dual-slot form factor.
“The RTX Pro 4000 SFF and RTX Pro 2000 represent a pivotal shift in workstation-class GPU design, bringing Blackwell-class AI acceleration and advanced ray tracing into 70W small form factor cards that fit existing enterprise footprints. By delivering up to 2.5x AI throughput and 1.7x ray-tracing uplift over their predecessors in the same thermal and power envelope, these GPUs enable AI inference, model fine-tuning, and high-end 3D workloads in locations where rack space, power budgets, and cooling headroom are fixed,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research.
The battle for compact AI GPU supremacy
The new RTX Pro 4000 SFF and RTX Pro 2000 GPUs face direct competition from Intel’s Arc Pro B-Series and AMD’s Radeon AI PRO R9700, which target similar AI acceleration, CAD, and visualization workloads in small form factors, with low power draw and competitive pricing.
“While AMD’s Radeon Pro series remains a primary competitor, offering strong performance for similar professional workloads, Nvidia maintains its market lead through its mature and widely adopted software ecosystem,” said Neil Shah, vice president at Counterpoint Research.
He added that the CUDA platform continues to be the industry standard for AI and parallel computing, giving Nvidia a distinct advantage. While AMD’s ROCm platform is progressing, the extensive developer support, documentation, and training available for CUDA allow Nvidia to widen the performance and efficiency gap in a wide array of professional and AI-centric applications.
Shah also cautioned that realizing the full potential of these GPUs requires specialist skills and infrastructure.
“Today’s engineers and data scientists need to be proficient in popular AI frameworks like TensorFlow and PyTorch, but they also need to understand how to optimize their code with Nvidia-specific libraries, such as TensorRT. Enterprises must also be prepared to manage and deploy these GPU-accelerated systems, ensuring that their IT infrastructure, including power and cooling, is capable of supporting the high-performance demands of these workstations,” added Shah.
On-prem AI gains new momentum
Nvidia’s latest small form factor GPUs offer CIOs a way to expand on-premises AI capabilities without major infrastructure changes. The compact and energy-efficient design allows AI and graphics workloads to run locally, reducing reliance on cloud resources, minimizing latency, and improving data privacy.
“The new RTX Pro series compresses enterprise-grade AI capability into a format that can be integrated without electrical rewiring or space retrofits. This creates new options for CIOs managing latency-sensitive or compliance-bound workloads, such as medical imaging, engineering simulation, or financial modelling, to run them entirely within office workstations,” added Gogia.
However, CIOs will need to weigh deployment strategies carefully.
“CIOs will have to develop robust total cost of ownership models for their specific AI implementation program to compare on-prem deployment CAPEX and OPEX cost structures vs. hyperscaler-provided cloud solutions, along with the value proposition quotient in each case,” stated Danish Faruqui, CEO of Fab Economics.
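The CAPEX-vs-OPEX comparison Faruqui describes can be sketched as a simple model. The sketch below is illustrative only: every figure (hardware price, power draw, electricity rate, maintenance rate, cloud hourly rate) is a hypothetical assumption, not vendor or hyperscaler pricing, and the function names are invented for this example.

```python
# Hypothetical total-cost-of-ownership (TCO) sketch comparing an on-prem
# workstation GPU against a pay-as-you-go cloud GPU instance.
# All dollar figures and rates below are illustrative assumptions.

def on_prem_tco(hardware_cost, watts, hours_per_year, years,
                electricity_rate_kwh=0.15, annual_maintenance=0.10):
    """CAPEX (hardware purchase) plus OPEX (power and maintenance) over the term."""
    energy_kwh = watts / 1000 * hours_per_year * years
    opex = (energy_kwh * electricity_rate_kwh
            + hardware_cost * annual_maintenance * years)
    return hardware_cost + opex

def cloud_tco(hourly_rate, hours_per_year, years):
    """Cloud cost: no CAPEX, purely usage-based OPEX."""
    return hourly_rate * hours_per_year * years

# Example: a hypothetical $1,500 card with a 70 W TDP, used 2,000 hours/year
# for 3 years, versus a hypothetical $1.00/hour cloud GPU at the same utilization.
on_prem = on_prem_tco(1500, watts=70, hours_per_year=2000, years=3)
cloud = cloud_tco(1.00, hours_per_year=2000, years=3)
print(f"on-prem: ${on_prem:,.2f}  cloud: ${cloud:,.2f}")
```

A real model would also weigh utilization patterns, data-egress fees, refresh cycles, and staffing, which is where the "value proposition quotient" Faruqui mentions comes in.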
Rollout plans, market potential
The RTX Pro 4000 Blackwell SFF Edition and RTX Pro 2000 Blackwell GPUs are scheduled to launch later this year. The RTX Pro 4000 Blackwell SFF Edition will be available through major OEMs including Dell Technologies, HP, and Lenovo. The RTX Pro 2000 will be distributed via PNY, TD SYNNEX, and system builders such as BOXX, Dell Technologies, HP, and Lenovo.
“Nvidia’s new GPUs with 2.5× faster AI acceleration for smaller workstations uniquely enable pioneering entry into the next five-year on-prem AI total addressable market, estimated to be over $280 billion in cumulative enterprise opportunity for workstation-specific GPUs,” added Faruqui.
Source: Network World