Big U.S. technology companies are gearing up for an unprecedented wave of spending on artificial intelligence infrastructure. Industry forecasts project capital expenditures of roughly $650 billion in 2026, driven largely by investments in data centers, AI chips, and cloud computing capacity.
The massive outlay reflects the escalating competition in the AI race as these firms vie for dominance in machine learning, large language models, and next-generation computing services. Alphabet (Google’s parent), Microsoft, Amazon, and Meta Platforms Inc. are collectively expected to pour hundreds of billions of dollars into their infrastructure this year.
Amazon announced on Thursday that it plans to invest roughly $200 billion this year. Analysts at Morgan Stanley predict the company could face negative free cash flow of nearly $17 billion by 2026, while Bank of America estimates an even larger deficit of $28 billion. In a recent SEC filing, Amazon told investors it may raise both equity and debt as it continues its expansion.
Google is also set to nearly double its AI spending, to between $175 billion and $185 billion, driven by demand for its Gemini AI services, cloud infrastructure, and advanced AI processors. Alphabet’s capital push comes amid strong revenue growth but mixed investor reactions to its high cost structure.
Meta is expected to allocate between $115 billion and $135 billion for data centers and AI initiatives, while Microsoft is estimated to contribute tens of billions of dollars more toward expanding its cloud and AI platforms.
AI research costs have ballooned in recent years, with the most compute-intensive model training runs estimated to exceed hundreds of millions of dollars and rising rapidly year over year. AI has become central to the future strategy of major technology companies: advances in generative AI, cloud-based machine learning, and increasingly compute-intensive large language models demand immense data center capacity, specialized chips, and cutting-edge infrastructure.