Google claims AI supercomputer speed superiority with new Tensor chips

A new white paper from Google details the company’s use of optical circuit switches in its machine-learning training supercomputer, saying that the TPU v4 model with those switches in place offers better performance and greater energy efficiency than general-purpose processors.

Google’s Tensor Processing Units — the basic building blocks of the company’s AI supercomputing systems — are essentially ASICs, meaning that their functionality is built in at the hardware level, as opposed to the general-purpose CPUs and GPUs used in many AI training systems. The white paper details how, by interconnecting more than 4,000 TPUs through optical circuit switching, Google has been able to achieve speeds 10 times faster than previous models while consuming less than half as much energy.


Source: Network World – Data Center