Ampere has announced it has begun shipping its next-generation AmpereOne processor, a server chip with up to 192 cores and special instructions aimed at AI processing.
It is also the first generation of chips from the company to use homegrown cores rather than cores licensed from Arm. Among the features of these new cores is support for bfloat16, a 16-bit floating-point format widely used in AI training and inferencing.
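For context, bfloat16 keeps float32's 8-bit exponent (and thus its dynamic range) but truncates the mantissa to 7 bits, which is why it is popular for AI workloads. A minimal sketch of the conversion, in pure Python using the standard `struct` module (the function name `to_bfloat16` is illustrative, not from Ampere's toolchain):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a float to the nearest bfloat16 value (returned as a float).

    bfloat16 is the top 16 bits of an IEEE-754 float32:
    1 sign bit, 8 exponent bits, 7 mantissa bits.
    """
    # View the float32 bit pattern as a 32-bit integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round-to-nearest-even on the 16 mantissa bits being dropped.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + rounding_bias) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_bfloat16(3.14159))  # only ~2-3 decimal digits of precision survive
```

The reduced precision halves memory traffic versus float32 while preserving range, a trade-off that suits neural-network inference.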
“AI is a big piece [of the processor] because you need more compute power,” said Jeff Wittich, chief products officer for Ampere. “AI inferencing is one of the big workloads that is driving the need for more and more compute, whether it’s in your big hyperscale data centers or the need for more compute performance out at the edge.”
Source: Network World – Data Center