Amazon just made one of the biggest bets in tech history, pouring an additional $5 billion into artificial intelligence company Anthropic, with the potential for up to $20 billion more in future funding, as the e-commerce giant races to secure a leading position in the AI market.
The investment builds on the $8 billion Amazon previously invested in Anthropic, bringing the total potential investment to a staggering $33 billion as the two companies deepen their partnership around custom AI silicon, cloud infrastructure, and enterprise AI deployment. The additional $20 billion in future funding is tied to specific commercial milestones.
Anthropic has committed to spending more than $100 billion over the next ten years on AWS technologies, including current and future generations of Trainium, Amazon’s custom silicon for AI training, and tens of millions of cores of Graviton, Amazon’s widely adopted custom CPU. Anthropic will secure up to 5 gigawatts of compute capacity to train and power advanced AI models, including significant Trainium3 capacity expected to come online this year.
The collaboration includes Trainium2, Trainium3, Trainium4, and the ability to purchase future generations of Trainium as they become available. Anthropic is already using AWS Trainium and Graviton to deliver scalable performance and cost efficiency across a broad range of generative AI workloads, allowing the company to accelerate growth with the scale, cost-efficiency, and security of AWS.
Both Trainium and Graviton are used by more than 100,000 customers each, and Amazon Bedrock, Amazon’s fully managed service for building generative AI applications, runs most of its inference on Trainium. The partnership also includes a meaningful expansion of inference capacity in Asia and Europe to better serve Claude’s growing international customer base.
AWS customers will be able to access the full Anthropic-native Claude console from within AWS through Claude Platform on AWS. The integration lets customers access Anthropic’s Claude Platform through their existing AWS account with no additional credentials, contracts, or billing relationships to manage, using the same AWS access controls and monitoring already in place.
“Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand,” said Andy Jassy, chief executive officer of Amazon. “Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI.”
Dario Amodei, chief executive officer and co-founder of Anthropic, said users increasingly consider Claude essential to how they work, creating pressure to build infrastructure that keeps pace with rapidly growing demand.
“Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS,” he stated.
Since 2023, Amazon and Anthropic have worked together to accelerate generative AI adoption across industries, making it easier for customers to build, deploy, and scale AI applications solving real-world problems. Over 100,000 customers now run Anthropic Claude models on AWS, making Claude one of the most popular model families on Amazon Bedrock.
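As an illustrative sketch (not part of the announcement), a customer with Bedrock model access might call a Claude model using their existing AWS credentials roughly like this; the region and model ID shown are assumptions chosen for the example:

```python
import json


def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the Anthropic Messages API request body that Bedrock expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke_claude(prompt: str) -> str:
    """Invoke a Claude model on Amazon Bedrock via boto3.

    Uses the caller's existing AWS credentials and IAM policies; requires
    an account with Bedrock model access enabled. Model ID and region are
    illustrative placeholders.
    """
    import boto3  # imported here so the payload helper above has no AWS dependency

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=build_claude_request(prompt),
    )
    # The response body is a stream of JSON matching the Messages API shape.
    return json.loads(response["body"].read())["content"][0]["text"]
```

Because the call authenticates through standard AWS credentials, access control and monitoring follow whatever IAM policies and CloudTrail logging the account already has in place, which is the point the integration emphasizes.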
The partnership includes close collaboration on custom silicon, with Anthropic working alongside Amazon’s Annapurna Labs to develop and optimize future Trainium chips. The company provides direct feedback from Claude training workloads to shape next-generation chip design for frontier AI models, improvements that benefit other AWS customers as well. The result is a strong engineering relationship in which the two teams communicate almost daily on everything from low-level optimization work to high-level architectural decisions for next-generation chips.
AWS collaborated with Anthropic to launch Project Rainier, one of the world’s largest AI compute clusters, built with nearly half a million Trainium2 chips. When it launched, Project Rainier was larger than any other AI compute cluster in the world, and Anthropic now uses it to train and deploy the Claude models behind AI-powered applications for customers worldwide.
Project Rainier is being used to build and deploy future versions of Claude; dedicating more compute to training makes the frontier models smarter and more accurate. The cluster serves as a template for deploying, at scale, the computational power AI needs to tackle challenging problems and enable breakthroughs in fields from medicine to climate science.
Over 100,000 organizations of all sizes run Claude models, including Opus, Sonnet, and Haiku, on Amazon Bedrock. Lyft incorporated Claude via Amazon Bedrock to power its customer care AI assistant, which delivers fast responses to common support issues and directs customers to human specialists when needed. The assistant has reduced average customer service resolution time by 87% while resolving thousands of customer requests daily.
Pfizer is using Amazon Bedrock with Claude to help scientists search through approximately 20,000 documents generated per drug development project using voice commands and a chatbot, saving scientists 16,000 annual search hours while reducing infrastructure costs by 55%. Claude also powers experiences across additional AWS technologies, including Amazon Connect, Kiro, and Amazon Quick.
Anthropic continues choosing AWS as its primary training and cloud provider for mission-critical workloads. Amazon developers and engineers also have access to build with Claude models to improve customer experiences across Amazon’s businesses.
The deepened partnership reflects Amazon’s broader strategy to position AWS as the leading platform for enterprise AI deployment while securing preferential access to Anthropic’s advanced AI models.
The $5 billion immediate investment and potential $20 billion in future funding represent one of the largest corporate AI investments to date, signaling Amazon’s determination to win the AI arms race.

