Oracle to spend $40B on Nvidia chips for OpenAI data center in Texas

Oracle is reportedly spending about $40 billion on Nvidia’s high-performance computer chips to power OpenAI’s new data center in Texas, marking a pivotal shift in the AI infrastructure landscape that has significant implications for enterprise IT strategies.

Oracle will purchase approximately 400,000 of Nvidia’s GB200 GPUs and lease the computing power to OpenAI under a 15-year agreement for the Abilene, Texas facility, the Financial Times reported, citing several people familiar with the matter.

The site will serve as the first US location for the Stargate project, the $500 billion data center initiative spearheaded by OpenAI and SoftBank.

The transaction exceeds Oracle’s entire fiscal 2024 cloud services and license support revenue of $39.4 billion, underscoring just how much companies are now willing to invest in AI infrastructure.

For enterprise IT leaders watching their own AI budgets balloon, this deal offers a stark reminder of where the market is heading.

Breaking the Microsoft dependency creates enterprise ripple effects

The deal represents a crucial step in OpenAI’s strategy to reduce its dependence on Microsoft, a move that could reshape how enterprises access and deploy AI services. The $300 billion “startup” previously relied exclusively on Microsoft for computing power, with a significant portion of Microsoft’s nearly $13 billion investment in OpenAI coming through cloud computing credits, according to the report.

OpenAI and Microsoft terminated their exclusivity agreement earlier this year after OpenAI became frustrated that its computational demands exceeded Microsoft’s supply capacity. The two companies are still negotiating how long Microsoft will retain licensing rights to OpenAI’s models.

Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, said OpenAI’s decision to partner with Oracle represents “a conscious uncoupling from Microsoft’s backend monopoly” that gives the AI company strategic flexibility as it scales.

“As AI models scale, so does infrastructure complexity—and vendor neutrality is becoming a resilience imperative,” Gogia said. “This move gives OpenAI strategic optionality — mitigating the risks of co-dependence with Microsoft, particularly as both firms increasingly diverge in go-to-market strategies.”

Neil Shah, VP for research and partner at Counterpoint Research, said Microsoft’s vertical integration creates potential conflicts of interest with other OpenAI customers.

“Diversifying beyond Microsoft for compute resources, infrastructure opens up new partnerships, verticals and customer rolodex for OpenAI,” he said. The move also supports OpenAI’s potential IPO ambitions by providing “independence and necessary diversification instead of exposure from just one investor or customer.”

Infrastructure scale reveals cost pressures

The Abilene facility will provide 1.2 gigawatts of power when completed by mid-2026, making it one of the world’s largest data centers. The site spans eight buildings and required $15 billion in financing from its owners, Crusoe and Blue Owl Capital.

The reported figures imply a price of roughly $100,000 per GB200 chip. Gogia said that pricing reflects a “brutal new reality” in which AI infrastructure is becoming a luxury-tier investment.
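The per-chip figure follows directly from the two reported numbers; a quick back-of-the-envelope check (both inputs are the article’s estimates, not confirmed pricing):

```python
# Implied per-chip price from the reported totals:
# ~$40B total spend across ~400,000 GB200 chips (article's estimates).
total_spend_usd = 40_000_000_000   # reported ~$40 billion
chip_count = 400_000               # reported ~400,000 GB200 units

price_per_chip = total_spend_usd / chip_count
print(f"Implied price per chip: ${price_per_chip:,.0f}")  # → $100,000
```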

“This pricing level affirms that the AI infrastructure market is no longer democratizing—it’s consolidating,” he said. “Access to frontier compute has become a defining moat.”

Oracle’s competitive leap forward

Oracle’s investment positions the company to compete more directly with Amazon Web Services, Microsoft Azure, and Google Cloud in the AI infrastructure market. According to Gogia, the deal represents a significant shift for Oracle from “AI follower to infrastructure architect — a role traditionally dominated by AWS, Azure, and Google.”

Shah said Oracle Cloud Infrastructure has been “lagging behind the big hyperscalers in the cloud and AI race,” but the Stargate partnership “gives a significant impetus to OCI in this AI Infrastructure-as-a-Service race,” noting that Oracle has already been seeing “triple digit GPU consumption demand for AI training from its customers.”

The facility’s scale rivals Elon Musk’s plans to expand his “Colossus” data center in Memphis to house about 1 million Nvidia chips. Amazon is also building a data center in northern Virginia larger than 1 gigawatt, showing how the AI infrastructure arms race is heating up across the industry.

Stargate’s global ambitions

The Abilene project fits into Stargate’s broader plan to raise $100 billion for data center projects, potentially scaling to $500 billion over four years. OpenAI and SoftBank have each committed $18 billion to Stargate, with Oracle and Abu Dhabi’s MGX sovereign wealth fund contributing $7 billion each, the report added.

OpenAI has also expanded Stargate internationally, with plans for a UAE data center announced during US President Donald Trump’s recent Gulf tour. The Abu Dhabi facility is planned as a 10-square-mile campus with 5 gigawatts of power.

Gogia said OpenAI’s selection of Oracle “is not just about raw compute, but about access to geographically distributed, enterprise-grade infrastructure that complements its ambition to serve diverse regulatory environments and availability zones.”

Power demands create infrastructure dilemma

The facility’s power requirements raise serious questions about AI’s sustainability. Gogia noted that the 1.2-gigawatt demand — “on par with a nuclear facility” — highlights “the energy unsustainability of today’s hyperscale AI ambitions.”

Shah warned that the power envelope keeps expanding. “As AI scales up and so does the necessary compute infrastructure needs exponentially, the power envelope is also consistently rising,” he said. “The key question is how much is enough? Today it’s 1.2GW, tomorrow it would need even more.”

This escalating demand could burden Texas’s infrastructure, potentially requiring billions in new power grid investments that “will eventually put burden on the tax-paying residents,” Shah noted. Alternatively, projects like Stargate may need to “build their own separate scalable power plant.”

What this means for enterprises

The scale of these facilities explains why many organizations are shifting toward leased AI computing rather than building their own capabilities. The capital requirements and operational complexity are beyond what most enterprises can handle independently.

For IT leaders, the AI infrastructure game has become a battle of giants, with entry costs that dwarf traditional enterprise IT investments. Success will increasingly depend on choosing the right partners and specializing where smaller players can still compete effectively.

Oracle and OpenAI did not respond to requests for comment on the development.

Source: Network World