
HPE to ship a dedicated inference server for the edge

Later this month, Hewlett Packard Enterprise (HPE) will ship what appears to be the first server aimed specifically at machine learning inference at the edge.

Machine learning is a two-part process: training and inference. Training uses powerful GPUs from Nvidia and AMD, or other high-performance chips, to “teach” the AI system what to look for, such as in image recognition.


Inference applies the trained model to new data to determine whether the subject matches what the model has learned. A GPU is overkill for that task; a much lower-power processor can be used.
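As a rough illustration (not from the article), the sketch below shows why inference is so much lighter than training: a small, hypothetical image classifier runs a single forward pass on the CPU with gradients disabled, which is the kind of workload a lower-power edge processor can handle. The model and class count here are placeholders, not anything HPE has described.

```python
# Minimal sketch of CPU-only inference, assuming a hypothetical tiny classifier.
import torch
import torch.nn as nn

# Stand-in for a model that has already been trained elsewhere.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),        # 10 example classes
)
model.eval()                 # inference mode: no dropout or batch-norm updates

image = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed input image

with torch.no_grad():        # no gradients needed, so far less compute and memory
    logits = model(image)
    predicted_class = logits.argmax(dim=1).item()

print(predicted_class)
```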


Source: Network World – Data Center