Nvidia touts MLPerf 3.0 tests; Enfabrica details network chip for AI

AI and machine learning systems are working with data sets in the billions of entries, which means speeds and feeds are more important than ever. Two new announcements reinforce that point, both aimed at speeding data movement for AI.

For starters, Nvidia just published new performance numbers for its Hopper-based H100 compute GPU in MLPerf 3.0, a prominent benchmark for deep learning workloads. Naturally, Hopper surpassed its predecessor, the Ampere-based A100, in time-to-train measurements, and it’s also seeing improved performance thanks to software optimizations.

MLPerf runs thousands of models and workloads designed to simulate real-world use. These workloads include image classification (ResNet-50 v1.5), natural language processing (BERT Large), speech recognition (RNN-T), medical imaging (3D U-Net), object detection (RetinaNet), and recommendation (DLRM).


Source: Network World – Data Center