Amazon has significantly expanded its strategic collaboration with AI lab Anthropic, pledging an additional investment of up to USD25 billion on top of the USD8 billion already poured into the Claude maker. In return, Anthropic has committed to spending more than USD100 billion on Amazon Web Services (AWS) technologies over the next ten years, locking in up to 5 gigawatts of Trainium chip capacity to train and power its frontier AI models. The expanded pact deepens a partnership that began in 2023, broadens Claude’s international availability in Asia and Europe, and positions AWS as a central pillar of Anthropic’s infrastructure strategy amid a fierce arms race for AI compute.
Key Overview
- Amazon will invest an additional USD5 billion into Anthropic immediately, with up to USD20 billion more tied to commercial milestones, on top of its existing USD8 billion stake.
- Anthropic has committed to spending more than USD100 billion on AWS technologies over the next decade, including current and future generations of Trainium custom silicon and tens of millions of Graviton CPU cores.
- The Claude maker will secure up to 5 gigawatts of compute capacity across Trainium2, Trainium3, Trainium4, and future chip generations.
- Nearly 1 gigawatt of combined Trainium2 and Trainium3 capacity is expected to come online by the end of 2026.
- More than 100,000 customers already run Claude models on AWS through Amazon Bedrock, and Project Rainier — one of the largest AI compute clusters in the world — is set to expand under the new agreement.
- The deal includes a “meaningful expansion” of international inference in Asia and Europe to better serve Claude’s growing global customer base.
- Amazon CEO Andy Jassy framed the deal as validation of Amazon’s custom silicon strategy, while Anthropic CEO Dario Amodei said the collaboration is essential for keeping pace with surging demand for Claude.
Amazon Pours More Fuel on the AI Fire
Amazon has dramatically escalated its bet on generative AI, announcing a landmark expansion of its strategic collaboration with Anthropic that bundles together a headline-grabbing equity injection and one of the largest cloud commitments ever disclosed. Under the terms unveiled on Monday, Amazon will put USD5 billion into the Claude maker immediately, with up to USD20 billion more to follow, tied to specific commercial milestones. That comes on top of the USD8 billion Amazon had previously invested in Anthropic across two earlier tranches dating back to 2023.
In return, Anthropic has made an equally eye-watering commitment in the other direction. The San Francisco-based AI lab will spend more than USD100 billion on Amazon Web Services technologies over the next ten years, locking in up to 5 gigawatts of Trainium-powered compute capacity to train and deploy its frontier Claude models. For context, one gigawatt is roughly the output of a large nuclear power plant, meaning Anthropic has effectively locked up the computing equivalent of five nuclear plants’ worth of capacity on AWS infrastructure.
Custom Silicon Takes Centre Stage
At the heart of the deal is Amazon’s bet on its in-house AI chips. Anthropic’s USD100 billion commitment explicitly covers current and future generations of Trainium — Amazon’s custom AI accelerator — as well as tens of millions of Graviton cores, Amazon’s widely adopted general-purpose CPU chip. The scope stretches across Trainium2, Trainium3, Trainium4, and the option to purchase future generations of the custom silicon as they become available.
The two companies said that nearly 1 gigawatt of combined Trainium2 and Trainium3 capacity is expected to come online by the end of 2026, with significant Trainium3 deployments beginning over the course of the year. Anthropic currently uses more than 1 million Trainium2 chips to train and serve Claude.
Amazon CEO Andy Jassy framed the agreement as vindication of the company’s decade-long investment in designing its own chips through its Annapurna Labs subsidiary. “Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand,” Jassy said in the announcement. He added that Anthropic’s decade-long commitment to AWS Trainium “reflects the progress we’ve made together on custom silicon.”
Jassy’s confidence has numbers behind it. In his most recent annual shareholder letter, the Amazon chief revealed that Amazon’s custom chip business — spanning Graviton, Trainium, and Nitro — has doubled to a USD20 billion annualised revenue run rate, growing at triple-digit rates year-on-year. Jassy has even hinted at selling racks of Trainium chips to third parties in the future as demand continues to outstrip supply.
Project Rainier and the Largest AI Cluster on Earth
The expanded partnership builds on Project Rainier, an infrastructure initiative the two companies have been jointly developing since late 2024. Named after the 4,392-metre stratovolcano visible from Seattle on clear days, Project Rainier came fully online in late 2025 and features nearly 500,000 Trainium2 chips distributed across multiple data centres in the United States.
The cluster delivers more than five times the compute power Anthropic used to train its previous generation of AI models, and AWS projected that the Claude maker would scale to more than one million Trainium2 chips by the end of 2025. The architecture is built around what AWS calls “EC2 UltraClusters” of Trainium2 UltraServers, with each UltraServer combining four physical servers that each contain 16 Trainium2 chips, interconnected by Elastic Fabric Adapter networking technology. Project Rainier spans multiple AWS data centres rather than being concentrated in a single facility, including a site in St. Joseph County, Indiana, where Amazon is investing roughly USD11 billion in buildout.
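As a quick sanity check on those figures, the back-of-the-envelope sketch below (illustrative only, using just the numbers quoted in this article, not official AWS specifications) works out the chips per UltraServer and the rough number of UltraServers a fleet of nearly 500,000 chips implies.

```python
# Back-of-the-envelope sizing of Project Rainier's Trainium2 fleet,
# based solely on figures quoted in the article above.
SERVERS_PER_ULTRASERVER = 4   # four physical servers per Trainium2 UltraServer
CHIPS_PER_SERVER = 16         # 16 Trainium2 chips per physical server
TOTAL_CHIPS = 500_000         # "nearly 500,000 Trainium2 chips"

chips_per_ultraserver = SERVERS_PER_ULTRASERVER * CHIPS_PER_SERVER
ultraservers_needed = TOTAL_CHIPS // chips_per_ultraserver

print(f"Chips per UltraServer: {chips_per_ultraserver}")   # 64
print(f"UltraServers implied:  ~{ultraservers_needed:,}")  # ~7,812
```

In other words, the quoted topology implies 64 chips per UltraServer, so the cluster corresponds to on the order of eight thousand UltraServers stitched together by the Elastic Fabric Adapter fabric.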
The new Amazon-Anthropic deal ensures Project Rainier will not remain a one-off. Anthropic and Amazon’s Annapurna Labs will continue collaborating on next-generation custom silicon, with the AI lab providing direct feedback from Claude training workloads to shape future Trainium chip designs — a feedback loop described by the companies as benefitting the broader AWS customer base.
Going Global: Asia and Europe on the Map
Beyond the core compute commitments, the two companies also flagged a “meaningful expansion” of international inference capacity in Asia and Europe to better serve Claude’s growing overseas customer base. That geographic push comes as Anthropic increasingly targets markets outside the United States: the company opened its first India office in Bengaluru earlier in 2026 and has been grappling with capacity constraints as global demand balloons.
In its announcement of the Amazon deal, Anthropic acknowledged that surging consumer demand has strained its infrastructure, at times affecting reliability during peak hours — a pressure point the expanded AWS arrangement is explicitly designed to relieve. As part of the deal, the full Claude Platform will be made available directly within AWS, letting customers access Anthropic’s tools through their existing AWS account, billing, and security controls — a deeper integration than offering Claude solely through the Amazon Bedrock marketplace.
More than 100,000 customers currently run Claude models on AWS, making it one of the most popular model families on Amazon Bedrock, Amazon’s generative AI platform. In his shareholder letter earlier this month, Jassy singled out demand for Trainium chips — from customers including Anthropic, OpenAI, Apple, and Uber — as evidence that the company’s massive capital expenditure plans are underwritten by real, binding customer commitments rather than speculation.
What Amodei Said
Anthropic CEO and co-founder Dario Amodei offered his own framing of the partnership. “Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand,” Amodei said in the announcement, adding that the collaboration with Amazon will allow Anthropic to continue advancing AI research while delivering Claude to its expanding customer base.
That pressure is not hypothetical. Anthropic’s annualised revenue has topped USD30 billion, up from roughly USD9 billion at the end of 2025, with growth driven by enterprise, developer, and consumer adoption of Claude across free, Pro, Max, and Team tiers. Business clients account for the bulk of that mix, and subscriptions to Claude Code in particular have exploded since its general availability launch in May 2025.
Hyperscaler Hedging: Anthropic’s Multi-Cloud Strategy
The expanded AWS deal is the most dramatic — but by no means the only — infrastructure pact Anthropic has signed in recent months. The Claude maker has pursued an unmistakably multi-cloud strategy, inking deals with every major hyperscaler even as it deepens its Amazon ties.
In November 2025, Microsoft announced a surprise alliance with Anthropic and Nvidia that included a USD5 billion investment from Microsoft and up to USD10 billion from Nvidia, in exchange for Anthropic committing to spend USD30 billion on Microsoft’s Azure cloud platform. That deal also made Claude models available on Microsoft Foundry, making Claude the first foundation model family to be accessible across all three major cloud providers.
Google, too, was an early investor in Anthropic, and the lab has expanded its contract with Google Cloud to access up to one million of Google’s Tensor Processing Units. Anthropic’s models now run on Nvidia GPUs, AWS Trainium, and Google’s TPUs, reflecting a deliberate hedging strategy as compute supply becomes the industry’s most contested resource.
Those cumulative commitments — USD100 billion to AWS, USD30 billion to Azure, and the Google TPU deal — push Anthropic’s total disclosed compute spending obligations well past USD130 billion for the coming decade.
The OpenAI Shadow
Amazon’s Anthropic expansion also has to be read in light of the company’s broader AI positioning. Just two months before the Anthropic announcement, Amazon unveiled a USD50 billion investment in OpenAI as part of a record-breaking USD110 billion funding round for the ChatGPT maker. That deal made AWS the exclusive third-party cloud distribution provider for OpenAI Frontier and expanded an existing USD38 billion agreement by an additional USD100 billion over eight years, with OpenAI committing to roughly two gigawatts of Trainium capacity.
In other words, Amazon is now simultaneously underwriting the two leading frontier AI labs — a posture that stands in sharp contrast to Microsoft’s historically OpenAI-centric strategy. The Anthropic deal, announced just weeks after OpenAI publicly suggested Anthropic had made a strategic misstep by not acquiring enough compute, reads as a direct rebuttal to that claim.
Valuation, IPO Chatter, and What Comes Next
The new Amazon investment is being made at Anthropic’s latest valuation of USD380 billion, which the company formalised in February 2026 with a USD30 billion Series G funding round led by GIC and Coatue — the second-biggest private financing round on record for a technology company. Venture capital offers have since reportedly pushed Anthropic’s private valuation toward USD800 billion, on par with arch-rival OpenAI, with IPO speculation swirling around both labs.
Both Anthropic and OpenAI have been under pressure to demonstrate the long-term compute commitments that public market investors will expect before any listing. The Amazon deal is a strong signal on that front, and it arrives against the backdrop of Amazon itself planning a record USD200 billion capital expenditure budget for 2026, most of it directed at AI infrastructure.
For Amazon, the agreement cements AWS’s position as an indispensable partner to the AI lab founded in 2021 by a group of former OpenAI researchers and executives. For Anthropic, it delivers the compute headroom it needs to keep Claude competitive with GPT-class models and to meet escalating enterprise demand. And for the broader market, the pact is another sign that the AI infrastructure build-out — measured in gigawatts, nuclear-plant equivalents, and hundreds of billions of dollars — is only accelerating.