AI, power availability and Intel’s future

As 2025 kicks off, it’s time to dust off the crystal ball. Here are my data center predictions for the coming year.

1) Trump backs off on tech tariffs

For all the hand-wringing, Trump has a history of not following through on his threats. Either he will figure it out, or someone will point out that tariffs will not solve the problem of getting chips made in America. Likewise, he will deliver the money promised under the CHIPS Act. Ohio is a big beneficiary of the funds, and I cannot see Vice President-elect JD Vance allowing his home state to be denied.

2) Intel hires an outsider

While Intel does have talent in its executive suites, its senior leaders are not ready for the task ahead of them, and fresh eyes may be needed to address the company's problems. Intel's turnaround specialist will come from outside the company, making it the first Intel CEO not to have risen through the ranks. Some people are bullish on Qualcomm CEO Cristiano Amon.

3) AI needs to deliver, or spending trails off

The amount of investment in AI hardware is exorbitant, and while it has made Nvidia shareholders very happy, others are not as enthusiastic. OpenAI, Google, Microsoft, Meta, X, and more are going to start to feel pressure to deliver revenue. Right now, they are not getting a return on their investment, and that will have to change or spending will slow.

4) Hyperscalers go into the power business

This is an easy prediction to make because it’s already happening. Google and Microsoft are investing in power utilities because the grid is stressed to its limits and the demands of AI are enormous. The public utilities simply cannot deliver on the growing power needs of AI data centers, and the hyperscalers cannot sit around and wait for the power companies to get their act together. They’re going to go into business for themselves and provide their own power. And given how demanding their AI needs are, the public utilities probably won’t object.

5) AI improves HPC

One area where AI can make a difference is in operations, both in optimizing data center operations and in improving security detection. Supercomputers are not plug-and-play systems that can automatically scale up; they need to be tuned for maximum load balancing and scalability. We will start to see AI used to optimize operations and improve performance, and this will be reflected in the TOP500 supercomputer rankings with a big jump in overall performance across the board. Likewise, AI will be used to improve security, especially in zero-trust scenarios, to isolate questionable nodes and systems.
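To make the zero-trust idea concrete, here is a minimal, purely illustrative sketch of the kind of signal such a system might act on: flagging nodes whose telemetry deviates sharply from the fleet baseline so a policy engine could quarantine them. The node names, metric, and threshold are my own assumptions, not any vendor's API, and a production system would use far richer models than a z-score.

```python
# Illustrative sketch (hypothetical names and numbers): flag cluster nodes
# whose telemetry deviates sharply from the fleet baseline, the kind of
# signal an AI-assisted zero-trust policy could use to isolate a node.
from statistics import mean, stdev

def flag_suspect_nodes(metrics: dict[str, float], threshold: float = 2.0) -> list[str]:
    """Return nodes whose metric is more than `threshold` std devs from the mean.

    Note: with a small fleet and sample stdev, the maximum achievable
    z-score is (n - 1) / sqrt(n), so the default threshold is modest.
    """
    values = list(metrics.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [node for node, v in metrics.items() if abs(v - mu) / sigma > threshold]

# Example: outbound traffic (MB/s) per node; node-07 is an obvious outlier.
traffic = {f"node-{i:02d}": 12.0 + (i % 3) for i in range(1, 7)}
traffic["node-07"] = 250.0
print(flag_suspect_nodes(traffic))  # node-07's spike gets flagged
```

In practice the "AI" part replaces the z-score with a learned model of normal behavior, but the control flow (score telemetry, flag outliers, isolate) is the same.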

6) Liquid cooling takes off

The heat generated by AI systems has already driven the need for liquid cooling because air cooling is simply not enough. But it’s not just GPUs that run hot – new CPUs from Intel and AMD are pretty toasty as well. I expect to see an increase in self-contained liquid cooling that will go into existing data centers and not require a hefty retrofit. I also think that HPE and Dell will finally do their own liquid cooling, similar to Lenovo’s Project Neptune. Up until now, HPE and Dell have been content to leave liquid cooling to third parties, but they may have to finally do their own thing.

7) Intel splits

There’s just no way around it. Intel must spin off its fabrication business, just as AMD did in 2008: expensive, painful, and necessary for long-term success. Gelsinger simply didn’t have the bandwidth to manage both Intel’s foundries and Intel’s products, and all three parties suffered for it: the fab business was slow to get started, the chip business fell behind AMD, and Gelsinger’s tenure was cut short. It’s time to cut the fabs loose, Intel.

8) Continued predictions of on-prem exodus

Predictions of a mass exodus from on-premises data centers will continue, and they will continue to be wrong. There are too many reasons to keep an on-premises data center, starting with data privacy and integrity. Repatriation of data from the cloud to on-premises infrastructure goes on every year. On-premises data centers will die when mainframes do.

9) GPU utilization becomes paramount

Nvidia is showing no signs of cutting power draw, so it’s up to others to make these things work as efficiently as possible. That means maximum utilization and scaling of hardware. As a result, maximizing GPU utilization will become the primary design goal for modern data centers. This will drive innovations in hardware and software to sustain the infrastructure necessary for training and to minimize latency, stalls, and other issues that cause pauses in training. Success will be measured by how effectively data centers can keep their GPU resources utilized.
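As a rough illustration of the kind of metric this success would be measured by, here is a minimal sketch of a fleet-level utilization number: the average busy fraction across all GPUs and sampling intervals. The GPU names and readings are invented for the example; real deployments would pull these readings from telemetry such as NVIDIA's NVML counters.

```python
# Hypothetical sketch: a simple fleet-level GPU utilization metric.
# `samples` maps each GPU to a list of busy-fraction readings (0.0 to 1.0);
# names and numbers are illustrative, not from any real scheduler.

def fleet_utilization(samples: dict[str, list[float]]) -> float:
    """Average busy fraction across all GPUs and all sampling intervals."""
    readings = [r for gpu in samples.values() for r in gpu]
    return sum(readings) / len(readings) if readings else 0.0

cluster = {
    "gpu-0": [0.95, 0.90, 0.10],  # the 0.10 reading: a stall, e.g. waiting on data loading
    "gpu-1": [0.92, 0.94, 0.93],
}
print(f"{fleet_utilization(cluster):.0%}")  # prints "79%"
```

The point of the example is that a single stalled interval on one GPU drags the fleet number down noticeably, which is why eliminating pauses in training matters as much as raw hardware speed.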

10) Power constraints impact data center locations

With almost 500 data centers in the Virginia area, it’s safe to say that region is reaching its limit. The same applies to Texas and Santa Clara. The demand for large-scale processing of data for AI, data analytics, and quantum computing will shift where new data centers are built. The good news is that these applications, especially AI training, are not latency sensitive, so they can afford to be placed in a remote location with a lot of land and a lot of cheap power. This will largely involve data centers dedicated to massive computational processes, so there’s no need to worry that your colocation provider will open up shop in the Sierra Nevada.

Source: Network World