Anthropic’s Claude Partners with Google Cloud in a Historic AI Deal

Anthropic has announced a major expansion of its collaboration with Google Cloud, committing to the use of up to one million Tensor Processing Units (TPUs) to accelerate large-scale AI development. The deal, valued in the tens of billions of dollars, represents one of the largest cloud compute partnerships in the industry's history.

Driving the Next Frontier of AI with TPUs

According to Anthropic, this expansion will dramatically increase the computational power available for training and scaling its Claude AI models. The partnership will bring over a gigawatt of compute capacity online by 2026, an unprecedented boost that underscores Anthropic's rapidly growing demand for compute.

Strategically, the partnership deepens Google Cloud's foothold in high-performance AI infrastructure, positioning it against rivals NVIDIA and Amazon.

Meeting Explosive Enterprise Demand

Anthropic now serves over 300,000 business customers, with large enterprise accounts growing sevenfold in the past year. The expanded TPU capacity will support more rigorous testing, model alignment, and responsible AI deployment. These are key priorities as Anthropic scales Claude for Fortune 500 clients and AI-native startups alike.

“Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,” said Krishna Rao, CFO of Anthropic. “Our customers—from Fortune 500 companies to AI-native startups—depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand while keeping our models at the cutting edge of the industry.”

“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Thomas Kurian, CEO at Google Cloud. “We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh generation TPU, Ironwood.”

For context, rival OpenAI recently signed multiple deals that could cost over $1 trillion to secure about 26 gigawatts of computing capacity, enough to power roughly 20 million U.S. homes. For perspective, a single gigawatt of compute capacity can cost roughly $50 billion.

A Multi-Cloud Strategy for Resilient AI Infrastructure

Despite the Google expansion, Anthropic remains committed to its multi-platform compute strategy, balancing workloads across Google’s TPUs, Amazon’s Trainium, and NVIDIA GPUs. The company continues to collaborate with Amazon Web Services on Project Rainier, a massive cluster integrating hundreds of thousands of AI chips across U.S. data centers.

This diversified approach allows Anthropic to avoid overdependence on a single vendor while maintaining flexibility in AI training and deployment.

What This Means For the Future of AI

For the broader AI ecosystem, Anthropic’s TPU expansion represents a pivotal moment. It signals the transition from experimental research to industrial-scale AI production, with a focus on efficient, verifiable, and responsible scaling.

Analysts say Anthropic's move underscores the intensifying competition among hyperscalers such as Google, Amazon, and Microsoft as they court leading AI labs for high-value, long-term compute contracts.