Amazon Web Services (AWS) and OpenAI have entered into a multi-year partnership that will provide infrastructure to run and scale OpenAI’s artificial intelligence workloads. The agreement, valued at $38 billion, will span seven years starting immediately.
Under the partnership, OpenAI will access AWS compute resources comprising hundreds of thousands of NVIDIA GPUs, with the ability to expand to tens of millions of CPUs to scale agentic workloads. AWS already operates AI infrastructure with clusters exceeding 500,000 chips.
The infrastructure features NVIDIA GPUs, including GB200s and GB300s, clustered via Amazon EC2 UltraServers on the same network.
This design enables low-latency performance across interconnected systems. The clusters will support workloads ranging from serving inference for ChatGPT to training next-generation models.
OpenAI will begin utilising AWS compute immediately, with all capacity targeted for deployment before the end of 2026. The partnership includes the ability to expand further into 2027 and beyond.
“Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone,” OpenAI co-founder and CEO Sam Altman said in a statement.
“As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads,” added Matt Garman, CEO of AWS.
Earlier this year, OpenAI’s open-weight foundation models became available on Amazon Bedrock, making them accessible to millions of customers on AWS. OpenAI has since become one of the most-used publicly available model providers in Amazon Bedrock, with thousands of customers working with its models.
Companies using OpenAI models on Amazon Bedrock include Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health. These organisations employ the models for agentic workflows, coding, scientific analysis, mathematical problem-solving, and other applications.
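For developers, access to these models goes through Bedrock’s standard runtime API. The sketch below shows what a call might look like using boto3’s `bedrock-runtime` Converse API; the model identifier is an assumption for illustration only, so check the Bedrock console for the IDs actually available in your account and region.

```python
# Sketch: invoking an OpenAI open-weight model on Amazon Bedrock.
# The model ID is a hypothetical placeholder, not a confirmed identifier.
MODEL_ID = "openai.gpt-oss-120b-1:0"  # assumption for illustration


def build_converse_request(prompt: str, model_id: str = MODEL_ID) -> dict:
    """Build keyword arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    import boto3  # imported here so the request builder has no AWS dependency

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(prompt))
    return response["output"]["message"]["content"][0]["text"]


# Usage (requires AWS credentials and model access in your region):
# print(ask("Summarise this quarter's subscriber metrics."))
```

Separating the request construction from the network call keeps the payload easy to inspect and test without AWS credentials.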
The rapid advancement of AI has created enormous demand for computing power. Frontier model providers are turning to AWS for the performance, scale, and security needed to develop their models. The infrastructure AWS is deploying for OpenAI is designed to meet OpenAI’s current needs while remaining flexible enough to adapt to future requirements.