For the first time in its history, Nvidia is planning to manufacture AI supercomputers entirely within the United States, a significant pivot toward domestic production.
The initiative covers more than a million square feet of manufacturing space in Texas, with Foxconn as the partner in Houston and Wistron in Dallas, and mass production is expected to ramp up over the next 12 to 15 months.
Nvidia says it plans to produce up to $500 billion worth of AI infrastructure in the United States over the next four years through this manufacturing and supercomputing initiative.
What kind of supercomputers will Nvidia build?
Nvidia’s focus is on constructing AI supercomputers powered by its latest Blackwell chips. These systems are designed to handle the most demanding AI workloads, including large language models and complex simulations.
The Blackwell architecture promises significant improvements in performance and energy efficiency, positioning Nvidia at the forefront of AI hardware innovation.
Customers for these supercomputers include large-scale data centers, hyperscalers such as Amazon, Microsoft, and Google, national laboratories, elite research universities, and private enterprises focused on advanced AI model development and simulation.
Supercomputers are also used in healthcare, weather forecasting, and defense.
The world’s most powerful supercomputers
As of November 2024, the TOP500 list ranks the world’s most powerful supercomputers by their measured performance on the LINPACK benchmark. Below are ten of the highest-ranked systems, with reported performance and estimated build costs (a rough cost-per-performance comparison follows the list):
- El Capitan (USA) – 1.742 exaFLOPS – Estimated cost: $600 million
- Frontier (USA) – 1.35 exaFLOPS – Estimated cost: $500 million
- Aurora (USA) – 1.01 exaFLOPS – Estimated cost: $500 million
- Eagle (USA) – ~0.561 exaFLOPS – Estimated cost: $400 million
- Fugaku (Japan) – 0.442 exaFLOPS – Estimated cost: $1 billion
- LUMI (Finland) – 0.379 exaFLOPS – Estimated cost: $200 million
- MareNostrum 5 (Spain) – 0.314 exaFLOPS – Estimated cost: $151 million
- Leonardo (Italy) – 0.306 exaFLOPS – Estimated cost: $240 million
- Perlmutter (USA) – 0.250 exaFLOPS – Estimated cost: $146 million
- JUWELS Booster (Germany) – 0.246 exaFLOPS – Estimated cost: $200 million
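Because both performance figures and cost estimates are quoted above, a rough cost-per-exaFLOP number can be derived directly from them. The short Python sketch below does that for a few of the listed systems; it uses only the numbers quoted in the list, which are themselves estimates:

```python
# Rough cost-per-exaFLOP comparison built from the figures quoted above.
# Performance is the reported exaFLOPS value; cost is the public estimate.
systems = {
    "El Capitan": (1.742, 600e6),
    "Frontier": (1.35, 500e6),
    "Aurora": (1.01, 500e6),
    "LUMI": (0.379, 200e6),
}

for name, (exaflops, cost_usd) in systems.items():
    per_exaflop = cost_usd / exaflops
    print(f"{name:<10} ~${per_exaflop / 1e6:,.0f}M per exaFLOPS")
```

Taking the estimates at face value, this crude measure suggests the newest exascale systems deliver more performance per dollar than the smaller machines further down the list.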
The term “supercomputer” typically refers to a system capable of performing at or near the highest operational rate for computers.
These systems rely on parallel processing and large-scale architectures to achieve speeds measured in petaFLOPS or exaFLOPS.
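To make those units concrete, the back-of-the-envelope sketch below estimates a cluster’s theoretical peak throughput; every hardware number in it is a hypothetical assumption for illustration, not the specification of any real system:

```python
# Back-of-the-envelope peak-throughput estimate for a hypothetical cluster.
# Every hardware figure below is an illustrative assumption, not a vendor spec.
nodes = 1_000              # server nodes in the cluster (assumed)
gpus_per_node = 8          # accelerators per node (assumed)
flops_per_gpu = 60e12      # ~60 teraFLOPS of FP64 per accelerator (assumed)

peak_flops = nodes * gpus_per_node * flops_per_gpu
print(f"Theoretical peak: {peak_flops / 1e15:.0f} petaFLOPS "
      f"= {peak_flops / 1e18:.2f} exaFLOPS")
# 1 exaFLOPS = 1,000 petaFLOPS = 10**18 floating-point operations per second.
```

Real systems sustain only a fraction of such theoretical peaks on benchmarks like LINPACK, which is why the TOP500 ranks measured rather than theoretical performance.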
While Apple described the Power Mac G4 as a “desktop supercomputer” in 1999 due to its ability to exceed a gigaflop, actual supercomputers remain far out of reach for average consumers due to cost, power consumption, and application complexity.
However, cloud-based access to supercomputing capabilities has made them more widely available than ever.
Building the supercomputers: Partners and ecosystem
Nvidia’s ambitious project involves a robust ecosystem of partners:
- TSMC: Manufacturing Blackwell chips at its Phoenix, Arizona facility.
- Foxconn: Constructing supercomputer manufacturing plants in Houston, Texas.
- Wistron: Building additional manufacturing facilities in Dallas, Texas.
- Amkor and SPIL: Handling chip packaging and testing operations in Arizona.
This collaborative effort aims to establish a resilient and efficient supply chain for AI supercomputing infrastructure within the US.
Automation systems in use
Automation is central to Nvidia’s manufacturing strategy. The company leverages its own simulation and AI tools to enhance production efficiency:
- Isaac Sim: A simulation platform built on Nvidia Omniverse that enables developers to simulate and test AI-driven robotics solutions in physically based virtual environments (a minimal usage sketch follows this list).
- Omniverse: A platform for building and operating 3D simulation and digital twin applications, allowing for real-time collaboration and simulation across various industries.
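As an illustration of what this simulation-first workflow looks like in practice, here is a minimal sketch of a standalone Isaac Sim script, based on the Python workflow documented for recent releases. Module paths vary between versions and the script must be launched with the Python environment that ships with Isaac Sim, so treat it as a sketch rather than a definitive recipe:

```python
# Minimal standalone Isaac Sim script: start the simulator headless, build a
# trivial physics scene, and step it forward. Illustrative only; APIs and
# module paths differ between Isaac Sim releases.
from omni.isaac.kit import SimulationApp

# Launch the simulation app without a GUI, e.g. for automated test runs.
simulation_app = SimulationApp({"headless": True})

from omni.isaac.core import World  # must be imported after the app starts

world = World()
world.scene.add_default_ground_plane()  # simplest possible scene
world.reset()

for _ in range(100):            # advance the physics simulation 100 steps
    world.step(render=False)

simulation_app.close()
```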
Beyond simulation, the factories run by Foxconn, TSMC, and Wistron are known to use extensive automation, including:
- autonomous mobile robots (AMRs) to transport components between production lines
- robotic arms and collaborative robots, or cobots, for tasks such as chip placement, soldering, and inspection
- automated optical inspection (AOI) systems for quality control (a toy example of this kind of pass/fail check appears below)
- automated wafer handling systems in cleanrooms to reduce contamination risk
- robotic packaging and labeling cells for finished chipsets
These systems reduce labor costs, improve precision, and minimize error rates across manufacturing lines.
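To give a concrete, if simplified, sense of what an AOI-style check does, the toy Python sketch below compares measured features of a solder joint against tolerance windows. Every feature name and limit in it is invented for the example and does not describe any real production line:

```python
# Toy illustration of AOI-style pass/fail logic: compare measured features of a
# solder joint against tolerance windows. All names and limits are invented for
# this example and do not describe any real inspection system.
TOLERANCES = {
    "pad_offset_um": (0.0, 50.0),       # allowed lateral offset, micrometres
    "solder_height_um": (80.0, 140.0),  # allowed solder height, micrometres
}

def inspect(measurements: dict) -> list[str]:
    """Return a list of defect descriptions; an empty list means the unit passes."""
    defects = []
    for feature, (lo, hi) in TOLERANCES.items():
        value = measurements.get(feature)
        if value is None or not (lo <= value <= hi):
            defects.append(f"{feature}={value} outside [{lo}, {hi}]")
    return defects

print(inspect({"pad_offset_um": 12.0, "solder_height_um": 95.0}))   # passes: []
print(inspect({"pad_offset_um": 70.0, "solder_height_um": 95.0}))   # one defect
```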
Implications for the US semiconductor sector
Nvidia’s investment signifies a strategic shift in the US semiconductor landscape. By localizing production, the company aims to mitigate risks associated with global supply chain disruptions and geopolitical tensions.
The CHIPS and Science Act, passed in 2022, allocated roughly $52.7 billion in direct funding for domestic semiconductor manufacturing and research, along with a 25% investment tax credit for chipmaking facilities.
It has encouraged companies like Intel, TSMC, and Micron to expand operations in the US, and Nvidia’s plan is likely influenced by this trend.
However, ongoing trade restrictions and tariffs on Chinese-made chips and technology equipment may have also pushed Nvidia toward reshoring production as a risk mitigation strategy.
This move aligns with broader efforts to bolster domestic manufacturing capabilities, enhance technological sovereignty, and ensure secure, high-performance AI infrastructure within the United States.