Anthropic has announced a new agreement with Google and Broadcom to secure approximately 3.5 GW of next-generation tensor processing unit (TPU) compute capacity. The mammoth project is expected to come online starting in 2027.
The deal builds on the 1 gigawatt of Google TPU capacity Broadcom is already supplying to Anthropic in 2026. Anthropic CFO Krishna Rao described it as the company’s “most significant compute commitment to date.”
As Anthropic put it in its announcement:
This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development… We are making our most significant compute commitment to date to keep pace with our unprecedented growth.
Under the arrangement, Broadcom acts as the intermediary between Google’s custom silicon and Anthropic’s training and inference workloads. Broadcom has also signed a separate long-term agreement with Google to design and supply future generations of custom TPU chips, along with a supply assurance deal for networking and other components through 2031. Broadcom shares rose approximately 3% in extended trading following the announcement. Analysts at Mizuho estimated Broadcom would record $21 billion in AI revenue from Anthropic in 2026 alone, rising to $42 billion in 2027.
The compute expansion is driven by Anthropic’s commercial growth. The company’s run-rate revenue has surpassed $30 billion, up from approximately $9 billion at the end of 2025: a more than threefold increase in roughly three months. When Anthropic closed its $30 billion Series G round in February 2026 at a $380 billion valuation, over 500 business customers were each spending more than $1 million annually. That number has now exceeded 1,000, doubling in under two months.
Anthropic operates a multi-vendor chip strategy that sets it apart from competitors. Claude is trained and served across Amazon’s Trainium chips, Google’s TPUs and NVIDIA GPUs. The company says Claude is the only frontier model available on all three major cloud platforms: AWS, Google Cloud and Microsoft Azure.
This approach gives Anthropic both resilience against supply disruptions and negotiating leverage across providers. The AWS relationship remains foundational, with total Amazon investment reaching $8 billion and Project Rainier, a supercomputer cluster running roughly 500,000 Amazon Trainium 2 chips in Indiana, expected to scale beyond one million chips.
The majority of the new capacity will be located in the United States, extending Anthropic’s November 2025 commitment to invest $50 billion in American AI computing infrastructure. Initial sites in Texas and New York are coming online through 2026 in partnership with Fluidstack.
The Google-Broadcom capacity expands that footprint into 2027 and beyond. Broadcom’s SEC filing noted that Anthropic’s consumption of the expanded compute is “dependent on Anthropic’s continued commercial success” — a standard disclosure that nonetheless underlines the financial scale of the commitment.