
OpenAI has signed a significant compute leasing deal with Oracle, under which it will access 4.5 gigawatts (GW) of data center power, marking one of the largest single leasing arrangements in the industry.
The expansion is part of Project Stargate, OpenAI's initiative to support the US pursuit of artificial general intelligence (AGI), backed by a planned $500 billion investment over the next four years. Oracle is a key partner in the effort.
The announcement, reported by Bloomberg, follows Oracle signing a single $30 billion cloud deal earlier this week.
Oracle, along with development partner Crusoe, has already built a large-scale data center in Abilene, Texas, for OpenAI. The current 1.2 GW capacity there is being scaled up to 2 GW. To accommodate growing compute requirements, Oracle will now build additional data centers in collaboration with regional partners across several US states, including Michigan, Wisconsin, Wyoming, New Mexico, Georgia, Ohio, and Pennsylvania.
Oracle declined to comment. OpenAI did not respond to a request for comment.
A new phase of infrastructure planning
For CIOs, the implications are both promising and problematic. On one hand, OpenAI’s Stargate infrastructure may offer access to newer, more specialized AI compute without needing to build from scratch. On the other hand, such mega-deals are beginning to crowd the market.
“For CIOs, this shift means more competition for AI infrastructure. Over the next 12–24 months, securing capacity for AI workloads will likely get harder, not easier. Though cost is coming down, demand is increasing as well, which means CIOs must plan earlier and build stronger partnerships to ensure availability,” said Pareekh Jain, CEO at EIIRTrend & Pareekh Consulting. He added that CIOs should expect longer wait times for AI infrastructure. To mitigate this, they should lock in capacity through reserved instances, diversify across regions and cloud providers, and work with vendors to align on long-term demand forecasts.
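As a concrete illustration of that advice, a platform team might script capacity reservations across more than one region so that GPU availability is secured ahead of demand rather than sought at launch time. The sketch below uses AWS EC2 On-Demand Capacity Reservations via boto3 purely as an example of the mechanism Jain describes; the instance types, regions, zones, and counts are illustrative placeholders, and other clouds offer equivalent reservation or committed-use constructs.

```python
# Sketch: pre-reserving GPU capacity in multiple regions ahead of demand.
# Instance types, regions, zones, and counts are illustrative placeholders.
import boto3
from botocore.exceptions import ClientError

# Hypothetical reservation plan: (region, availability zone, instance type, count)
RESERVATION_PLAN = [
    ("us-east-1", "us-east-1a", "p5.48xlarge", 2),
    ("us-west-2", "us-west-2b", "p4d.24xlarge", 4),
]

def reserve_capacity(plan):
    """Create On-Demand Capacity Reservations so instances can launch when needed."""
    for region, zone, instance_type, count in plan:
        ec2 = boto3.client("ec2", region_name=region)
        try:
            resp = ec2.create_capacity_reservation(
                InstanceType=instance_type,
                InstancePlatform="Linux/UNIX",
                AvailabilityZone=zone,
                InstanceCount=count,
                EndDateType="unlimited",  # hold the capacity until explicitly released
            )
            reservation = resp["CapacityReservation"]
            print(f"{region}: reserved {count} x {instance_type} "
                  f"({reservation['CapacityReservationId']}, state={reservation['State']})")
        except ClientError as err:
            # A failed reservation is itself a planning signal: try another zone,
            # region, or provider before the workload actually needs the capacity.
            print(f"{region}: reservation failed: {err}")

if __name__ == "__main__":
    reserve_capacity(RESERVATION_PLAN)
```

The point of such a script is less the specific API than the practice: turning long-term demand forecasts into booked capacity across multiple regions and providers, instead of relying on on-demand availability when AI workloads arrive.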
“Enterprises stand to benefit from more efficient and cost-effective AI infrastructure tailored to specialized AI workloads, significantly lowering their overall future AI-related investments and expenses. Consequently, CIOs face a critical task: to analyze and predict the diverse AI workloads that will prevail across their organizations, business units, functions, and employee personas in the future. This foresight will be crucial in prioritizing and optimizing AI workloads for either in-house deployment or outsourced infrastructure, ensuring strategic and efficient resource allocation,” said Neil Shah, vice president at Counterpoint Research.
Strategic pivot toward AI data centers
The OpenAI-Oracle deal comes in stark contrast to developments earlier this year. In April, AWS was reported to be scaling back its plans for leasing new colocation capacity, a move that Kevin Miller, AWS vice president for global data centers, described as routine capacity management rather than a shift in long-term expansion plans.
Still, those reports raised questions about whether the hyperscale data center boom was beginning to plateau.
“This isn’t a slowdown, it’s a strategic pivot. The era of building generic data center capacity is over. The new global imperative is a race for specialized, high-density, AI-ready compute. Hyperscalers are not slowing down; they are reallocating their capital to where the future is: AI,” said Sharad Sanghi, cofounder and CEO of Neysa, an AI cloud and platform-as-a-service company.
OpenAI’s agreement with Oracle appears to signal the opposite of any slowdown in hyperscale data center growth, especially where AI is concerned.
“The OpenAI-Oracle deal, which involves building entirely new, unprecedentedly large AI infrastructure, underscores the insatiable demand for compute power that AI requires, pushing hyperscalers to secure gigawatts of capacity. This indicates that the hyperscale market is not slowing but rather undergoing a rapid, AI-driven transformation towards larger, more power-intensive facilities, with overall hyperscale data center capacity projected to nearly triple by 2030,” said Biswajeet Mahapatra, principal analyst at Forrester.
Tighter access to compute capacity
However, as hyperscalers prioritize large AI clients like OpenAI, CIOs will likely face higher costs and longer wait times for cloud and data center capacity.
“This development intensifies competition for general compute, demanding more agile and forward-thinking procurement. CIOs should proactively plan by diversifying cloud strategies, optimizing existing resources, and strategically negotiating contracts for predictable workloads,” Mahapatra cautioned.
Sanghi added that CIOs need to look beyond vanilla data center and compute infrastructure to the value that AI can deliver, and to determine how best to meet their objectives for training, fine-tuning, and inference.
Source: Network World