- Sam Altman says OpenAI will soon pass 1 million GPUs and aim for 100 million more
- Running 100 million GPUs could cost $3 trillion and break global power infrastructure limits
- OpenAI’s expansion to Oracle’s cloud and Google’s TPUs shows growing impatience with current capacity limits
OpenAI says it is on track to operate over one million GPUs by the end of 2025, a figure that already places it far ahead of rivals in terms of compute resources.
Yet for company CEO Sam Altman, that milestone is merely a beginning: “We will cross well over 1 million GPUs brought online by the end of this year,” he said.
The comment, delivered with apparent levity, has nonetheless sparked serious discussion about the feasibility of deploying 100 million GPUs in the foreseeable future.
A vision far beyond current scale
To put this figure in perspective, Elon Musk’s xAI runs Grok 4 on approximately 200,000 GPUs, which means OpenAI’s planned million-unit scale is already five times that number.
Scaling this to 100 million, however, would involve astronomical costs, estimated at around $3 trillion, and pose major challenges in manufacturing, power consumption, and physical deployment.
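The article’s own figures imply some back-of-envelope arithmetic. The sketch below only divides the numbers quoted above; the roughly $30,000-per-GPU figure it produces is an implication of the $3 trillion / 100 million estimate, not a quoted vendor price:

```python
# Back-of-envelope arithmetic implied by the article's figures.
# The per-GPU cost is derived by dividing the quoted totals; it is
# not an actual hardware price.

planned_gpus = 100_000_000    # Altman's "100x" target
estimated_total_cost = 3e12   # ~$3 trillion, per the article

implied_cost_per_gpu = estimated_total_cost / planned_gpus
print(f"Implied cost per GPU: ${implied_cost_per_gpu:,.0f}")  # → $30,000

# Scale comparison from the article: OpenAI's ~1M GPUs vs. xAI's ~200k
openai_gpus = 1_000_000
xai_gpus = 200_000
print(f"OpenAI vs. xAI scale: {openai_gpus / xai_gpus:.0f}x")  # → 5x
```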
“Very proud of the team but now they better get to work figuring out how to 100x that lol,” Altman wrote.
While Microsoft’s Azure remains OpenAI’s primary cloud platform, the company has also partnered with Oracle and is reportedly exploring Google’s TPU accelerators.
This diversification reflects an industry-wide trend, with Meta, Amazon, and Google also moving toward in-house chips and greater reliance on high-bandwidth memory.
SK Hynix is one of the companies likely to benefit from this expansion – as GPU demand rises, so does demand for HBM, a key component in AI training.
According to a data center industry insider, “In some cases, the specifications of GPUs and HBMs…are determined by customers (like OpenAI)…configured according to customer requests.”
SK Hynix’s performance has already seen strong growth, with forecasts suggesting a record-breaking operating profit in Q2 2025.
OpenAI’s collaboration with SK Group appears to be deepening. Chairman Chey Tae-won and CEO Kwak No-jung met with Altman recently, reportedly to strengthen their position in the AI infrastructure supply chain.
The relationship builds on past events such as SK Telecom’s AI competition with ChatGPT and participation in the MIT GenAI Impact Consortium.
That said, OpenAI’s rapid expansion has raised concerns about financial sustainability, with reports that SoftBank may be reconsidering its investment.
If OpenAI’s 100 million GPU goal materializes, it will require not just capital but major breakthroughs in compute efficiency, manufacturing capacity, and global energy infrastructure.
For now, the goal seems aspirational, an audacious signal of intent rather than a practical roadmap.
Via Tom's Hardware