OpenAI CEO Sam Altman has indicated that more large-scale infrastructure agreements are in development, following recent multibillion-dollar deals with both Nvidia and AMD. The announcement of the AMD partnership reportedly came as a surprise to Nvidia’s chief executive, Jensen Huang.
Nvidia CEO surprised by AMD deal
In an appearance on CNBC’s Squawk Box, Huang said he was not aware of OpenAI’s partnership with his company’s main competitor before it was publicly announced.
[Image: AMD stock soars following the landmark OpenAI AMD GPU partnership]
When asked if he knew about the deal, Huang replied, “Not really.” The exchange came shortly after Nvidia itself had finalized an agreement to invest up to $100 billion in OpenAI.
Contrasting deal structures
The two partnerships are structured very differently, highlighting OpenAI’s flexible approach to securing the massive computing resources it needs.
- The AMD deal: AMD will grant OpenAI tranches of its stock over several years, potentially amounting to as much as 10% of the company, effectively making OpenAI a significant shareholder in the chipmaker. In return, OpenAI will deploy and help develop AMD’s next-generation AI GPUs.
- The Nvidia deal: Nvidia is investing up to $100 billion directly in OpenAI, becoming a shareholder in the AI company. Here the capital flows the other way: from the hardware manufacturer into the AI developer.
Some critics have described both deals as “circular,” noting that the chipmakers are, in effect, helping to finance OpenAI’s purchases of their own hardware.
OpenAI’s shift to a self-hosted hyperscaler
For years, OpenAI has accessed Nvidia’s technology through cloud service providers like Microsoft Azure and Oracle OCI. The new partnership marks a shift in this strategy. “This is the first time we’re going to sell directly to them,” Huang explained, noting that the sales include complete systems and networking gear, not just GPUs.
According to Huang, the goal of this direct supply chain is to prepare OpenAI to become its own “self-hosted hyperscaler.” This points to a long-term strategy in which OpenAI builds and manages its own data center infrastructure, reducing its reliance on third-party cloud services. Huang estimated that building this infrastructure would cost OpenAI roughly “$50 to $60 billion” per gigawatt of data center capacity.
A trillion-dollar year of infrastructure deals
In 2025 alone, OpenAI has initiated several massive infrastructure projects:
- A $500 billion Stargate deal with Oracle and SoftBank for 10 gigawatts of U.S.-based data centers.
- A separate $300 billion cloud agreement with Oracle.
- The Nvidia partnership, covering at least 10 gigawatts of AI data centers.
- The new AMD deal, accounting for another 6 gigawatts.
Some industry estimates place the total value of deals signed by OpenAI this year at around $1 trillion.
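As a rough back-of-the-envelope check, Huang’s per-gigawatt estimate can be applied to the announced chip-deal capacities. This is an illustrative sketch only: it assumes the $50 to $60 billion figure applies uniformly, and the resulting totals do not necessarily map onto the Stargate or Oracle contract values.

```latex
% Illustrative arithmetic only -- assumes Huang's $50-60B-per-gigawatt estimate
% applies uniformly to the announced Nvidia and AMD capacities.
\underbrace{10\ \mathrm{GW}}_{\text{Nvidia}} \times \$50\text{--}60\ \mathrm{B/GW} \approx \$500\text{--}600\ \mathrm{B}
\qquad
\underbrace{6\ \mathrm{GW}}_{\text{AMD}} \times \$50\text{--}60\ \mathrm{B/GW} \approx \$300\text{--}360\ \mathrm{B}
```

By that yardstick, the Nvidia and AMD buildouts alone would imply infrastructure costs approaching $1 trillion, broadly in line with the industry estimates above, though it is unclear how much these figures overlap with the other announced deals.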
Altman confirms more deals are coming
In a pre-recorded interview on the a16z Podcast, Sam Altman confirmed that the recent series of high-value partnerships is not yet finished. “You should expect much more from us in the coming months,” he stated.
Altman justified the aggressive expansion by citing his confidence in the capabilities of OpenAI’s future products, which he believes will create an exponential increase in demand. He acknowledged the significant gap between the company’s current revenue—reportedly $4.5 billion in the first half of 2025—and the scale of its infrastructure investments. However, he expressed confidence that the bets would pay off.
“I’ve never been more confident in the research road map in front of us and also the economic value that will come from using those [future] models.”
Altman concluded by emphasizing that achieving this vision requires a broad coalition of industry support, as OpenAI cannot finance the expansion on its own. “We’re going to partner with a lot of people,” he said.