Microsoft CEO Satya Nadella announced on Thursday the deployment of the company’s first large-scale AI system, or “AI factory,” which will be used to run OpenAI workloads. In a video post, Nadella stated this is the first of many such systems that will be installed across Microsoft’s global Azure data centers.
Another first for our AI fleet… a supercomputing cluster of NVIDIA GB300s with 4600+ GPUs and featuring next gen InfiniBand.
First of many as we scale to hundreds of thousands of GB300s across our DCs, and rethink every layer of the stack across silicon, systems, and software…
— Satya Nadella (@satyanadella) October 9, 2025
The deployed system is a cluster of Nvidia GB300 racks containing more than 4,600 of Nvidia’s Blackwell Ultra GPUs, connected with InfiniBand networking technology. Microsoft stated its plans include deploying “hundreds of thousands” of these GPUs globally in systems capable of running next-generation AI models with “hundreds of trillions of parameters.”
The announcement follows recent high-profile deals made by Microsoft’s partner, OpenAI, with Nvidia and AMD to build its own AI data centers. OpenAI has reportedly committed as much as $1 trillion to that effort in 2025, and CEO Sam Altman indicated this week that more deals were forthcoming.
In its announcement, Microsoft highlighted its existing infrastructure of more than 300 data centers in 34 countries, stating they are positioned to meet the current demands of frontier AI.