Generative artificial intelligence (genAI) like ChatGPT has, to date, mostly made its home in the massive data centers of service providers and enterprises. When companies want to use genAI services, they essentially purchase access to an AI platform such as Microsoft 365 Copilot, the same way they would any other SaaS product.
One problem with cloud-based systems is that the underlying large language models (LLMs) running in data centers consume massive amounts of GPU cycles and electricity, not only to power applications but also to train genAI models on big data and proprietary corporate data. There can also be issues with network connectivity. And the genAI industry faces a scarcity of the specialized processors needed to train and run LLMs. (It can take up to three years to bring a new silicon factory online.)
Source: Computerworld