OpenAI Taps AWS in $38 Billion Cloud Partnership Deal
Seven-year deal gives OpenAI immediate access to hundreds of thousands of Nvidia chips on AWS while its Azure pact continues.
OpenAI has signed a seven-year, $38 billion deal to run core AI workloads on Amazon Web Services (AWS), widening its cloud base beyond Microsoft Corp.
OpenAI will start using AWS immediately; all planned capacity is targeted to come online by the end of 2026, with room to expand in 2027 and beyond, the companies said.
Under the deal, OpenAI will begin operating workloads on AWS infrastructure, tapping into “hundreds of thousands” of Nvidia graphics processing units (GPUs) across US data centers.
The initial phase will use existing AWS facilities, with plans to expand capacity in the coming years.
“Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era.”
AWS chief Matt Garman called the pact a vote of confidence in Amazon’s “best-in-class infrastructure” and the breadth of optimized compute available now.
The move follows OpenAI’s corporate restructuring and comes alongside its Microsoft relationship, not in place of it.
Dave Brown, vice-president of compute and machine learning services at AWS, said OpenAI will run on dedicated infrastructure. “Some of that capacity is already available, and OpenAI is making use of that,” he said.
Reuters reported last week that OpenAI agreed to purchase about $250 billion of Azure services as part of that restructuring, underscoring a multi-cloud strategy rather than a single-provider shift.
For Amazon, the deal adds a marquee AI customer as it races to meet demand for compute. AWS has said it can scale clusters beyond 500,000 chips and is rolling out new infrastructure to support generative-AI training and real-time services like ChatGPT responses.
The companies also pointed to model availability on AWS. Amazon said OpenAI’s open-weight foundation models are offered via the Bedrock service, with customers including Peloton, Thomson Reuters, Comscore, Triomics and Verana Health.
Amazon, meanwhile, continues to back OpenAI rival Anthropic, in which it has invested billions; AWS is constructing an $11 billion data-center campus in Indiana dedicated to Anthropic's workloads.
The infrastructure will support both training and inference workloads. The agreement currently covers Nvidia's Blackwell chips but may later expand to other processors.