Nvidia lays out plans to build AI supercomputers in the US

Industry analysts offered mixed reactions Monday to an announcement from Nvidia that it plans to produce AI supercomputers entirely in the US for the first time.

The company said in a blog post that, together with its manufacturing partners, it has commissioned more than one million square feet (92,900 square meters) of manufacturing space to build and test Nvidia Blackwell chips in Arizona, and AI supercomputers in Texas.

The post went on to say that Nvidia plans to produce “up to half a trillion dollars of AI infrastructure in the US through partnerships with TSMC, Foxconn, Wistron, Amkor, and SPIL.”

It added, “Nvidia AI supercomputers are the engines of a new type of data center created for the sole purpose of processing artificial intelligence — AI factories that are the infrastructure powering a new AI industry. Manufacturing Nvidia AI chips and supercomputers for American AI factories will create hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades.”

In reaction, Scott Bickley, advisory fellow at Info-Tech Research Group, said, “the reality of this announcement, based on the suppliers mentioned, specifically Foxconn, Wistron, Amkor, and SPIL, speaks to the migration of the chip testing, packaging, and node/rack assembly to the US to take the tariff issue off the table. The reference to chip manufacturing via TSMC is not ‘new news,’ as those investment announcements have already been made weeks ago.”

As for the job creation prediction, he said he is skeptical that this announcement will lead to hundreds of thousands of new jobs, as Nvidia claims, since employing robotics and manufacturing automation is a stated goal.

This is, said Bickley, a “job killer, not a job enabler. The reality is that this announcement likely seeks to appease the US government’s desire to reshore this work in the US. However, the majority of the final server assembly work already takes place in Mexico today, at a much lower cost of labor than can be realized in the US.”

He said, “[it is] likely a hedge to delay tariff impacts on Nvidia products coupled with a strategy to start the process of building highly automated facilities that can close the labor arbitrage gap that exists today.”

Patrick Moorhead, founder and chief analyst at Moor Insights & Strategy, said he had been waiting for word from Nvidia, as the company had been quiet on tariff-related matters.

The company’s plans, he said, likely revolve around L11 production, “which is the creation of a fully tested system. I doubt this includes PCA (mainboard) manufacturing or they would have said this discretely. Therefore, it is still likely that the mainboards will be produced in Taiwan or Mexico. The highest-performance AI servers are a challenge to build, test, and validate.”

Therefore, said Moorhead, “it makes sense that Nvidia would do this in the US for US customers. I believe some OEMs like Dell Technologies already do L11 in the US. The wildcard here is the import duties on the different components coming in from Taiwan, China, and Mexico.”

In the announcement, Jensen Huang, founder and CEO of Nvidia, said, “the engines of the world’s AI infrastructure are being built in the United States for the first time. Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency.”

The company stated that it will use its advanced AI, robotics, and digital twin technologies to design and operate the facilities, including employing Nvidia Omniverse to create digital twins of the factories and Nvidia Isaac GR00T to build robots that automate manufacturing.

“Nvidia Blackwell chips have started production at TSMC’s chip plants in Phoenix, Arizona,” it said. “Nvidia is building supercomputer manufacturing plants in Texas, with Foxconn in Houston, and with Wistron in Dallas. Mass production at both plants is expected to ramp up in the next 12-15 months.”

Forrester Senior Analyst Alvin Nguyen said Monday in an email that Nvidia’s plan “makes sense in light of the fluid tariff situation. Leveraging partners such as Foxconn, Wistron, TSMC, Amkor, and SPIL will help accelerate the development of a US AI server supply chain. This helps them preserve access to AI chips and infrastructure to companies with US locations without a cost penalty/disadvantage that could help erode their AI leadership position and slow down AI adoption in the US.”

Further, the plan echoes his separate commentary last week about the tariffs, in which he predicted, “when it comes to semiconductors, there will be more foundries being built around the world as the push for geographic diversity and the supporting supply chains are deployed. This will be beneficial as there is less dependence on Taiwan for the majority of chip production.”

He added, “this may cause some changes to data center investments depending on the state of tariffs and the cost impacts by location. Enterprises may be incentivized to time buildouts to where they see better returns, but this is where sovereignty laws (AI and data) will also have an impact and can completely change the basis for where to build data centers.”

Source: Network World