On-premises data centers are both growing and shrinking. How is this possible? They’re growing as the volume of enterprise GPU servers increases. The shrinkage refers to share of the market: Hyperscalers are growing much faster than on-premises data centers, which is leading to a reduction in on-prem’s share of total data center capacity.
The latest research from Synergy Research Group shows that hyperscalers now account for 44% of worldwide data-center capacity, with non-hyperscale colocation facilities accounting for another 22%. (Related: Data center costs surge up to 18% as enterprises face two-year capacity drought)
That leaves on-premises data centers with 34% of the total – a significant drop from the 56% of capacity that on-premises data centers accounted for just six years ago. By 2030, Synergy projects hyperscale operators will claim 61% of all capacity, while on-prem share will drop to just 22%.
However, even though colocation and on-premises data centers will continue to lose share, they will still grow in absolute terms; they just won't grow as fast as hyperscalers. The result is an illusion of shrinkage when the reality is slower growth.
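The arithmetic behind this is straightforward. A minimal sketch, using Synergy's current 44/22/34 share split as a starting point but with purely hypothetical annual growth rates (the rates below are illustrative assumptions, not Synergy projections), shows how a segment can grow in absolute capacity while its share of the total falls:

```python
# Starting points: Synergy's current share split, treated as capacity units.
# The annual growth rates are hypothetical assumptions for illustration only.
capacity = {"hyperscale": 44.0, "colocation": 22.0, "on_prem": 34.0}
growth = {"hyperscale": 0.20, "colocation": 0.08, "on_prem": 0.03}  # assumed

# Compound each segment's capacity over six years.
for year in range(6):
    for segment in capacity:
        capacity[segment] *= 1 + growth[segment]

# On-prem capacity rises in absolute terms, yet its share of the total drops.
total = sum(capacity.values())
for segment, cap in capacity.items():
    print(f"{segment}: {cap:.1f} units, {100 * cap / total:.1f}% share")
```

With these assumed rates, on-prem capacity climbs from 34 to roughly 41 units over six years, yet its share of the (much larger) total falls from 34% to around 20%: slower growth, not shrinkage.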
In fact, after a sustained period of essentially no growth, on-premises data center capacity is receiving a boost thanks to genAI applications and GPU infrastructure.
“While most enterprise workloads are gravitating towards cloud providers or to off-premise colo facilities, a substantial subset are staying on-premise, driving a substantial increase in enterprise GPU servers,” said John Dinsdale, a chief analyst at Synergy Research Group.
Hyperscale is growing so fast because of a continuation of the decade-long trend toward cloud services, Dinsdale said. IaaS and PaaS have been hugely popular for their flexibility, elasticity and ease of deployment compared with companies having to buy and run their own servers.
Cloud has also stimulated new use cases and workloads. Then there is the AI factor: While AI applications can be run on GPU servers in house, the bulk of new AI workloads are better served by running on public clouds, Dinsdale said.
Source: Network World