Anthropic Locks In Multi-Gigawatt Compute Deal With Google, Broadcom
Most of the new TPU capacity will be based in the United States and is expected to come online from 2027.
[Image source: Chetan Jha/MITSMR India]
Anthropic said it has signed a deal with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, with the new compute expected to come online starting in 2027 as the AI startup moves to support rapid growth in demand for Claude.
TPUs, or Tensor Processing Units, are Google’s custom chips designed for AI workloads such as training and running large language models.
The company said the expanded infrastructure will power its frontier models and help serve rising customer demand worldwide.
In a Form 8-K filed on 6 April, Broadcom said Anthropic will access about 3.5 gigawatts of that capacity beginning in 2027, as part of the broader multi-gigawatt TPU commitment.
“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure,” Anthropic Chief Financial Officer Krishna Rao said in a company statement. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”
Anthropic said its run-rate revenue has surpassed $30 billion, up from about $9 billion at the end of 2025. It also said the number of business customers spending more than $1 million on an annualized basis has grown to more than 1,000, from over 500 when it announced its Series G round in February.
Most of the new compute capacity will be located in the US, expanding on Anthropic’s November 2025 pledge to invest $50 billion in American AI infrastructure. That earlier plan included data center builds in Texas and New York with Fluidstack.
The deal also deepens Anthropic’s ties with Google Cloud and Broadcom, while preserving its multi-platform approach to training and inference.
The company trains and runs Claude across a mix of hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs, allowing it to optimize performance and resilience. Amazon Web Services remains its primary training partner, with ongoing collaboration on Project Rainier.
Claude is currently the only frontier AI model available across all three major cloud platforms: AWS (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry). That positioning gives Anthropic unusual flexibility in an increasingly fragmented AI ecosystem.
One comment on X said: “For context, that’s enough power to run a small country. Anthropic went from ‘scrappy safety startup’ to the fastest growing enterprise software company in history in like 18 months.”
Another pointed to the company’s strategic balancing act: “interesting setup: open models on one side, exclusive compute relationships on the other. Forget about choosing sides, they’re positioning themselves in both. I wonder how long that balance holds once competition tightens.”