Broadcom's 3.5 Gigawatt TPU Deal Reveals the Real Winner in AI Chip Wars

Broadcom has emerged as the critical middleman in the AI chip race, securing a deal to supply 3.5 gigawatts of Google TPU (Tensor Processing Unit) capacity to Anthropic starting in 2027. This arrangement, disclosed in a securities filing, reveals an often-overlooked truth about the AI infrastructure boom: the companies designing chips matter less than the companies actually building them.

Why Is Broadcom Becoming the Kingmaker of AI Chips?

Broadcom's role in this deal illustrates a fundamental shift in how AI companies approach computing power. Rather than handling every step in-house, tech giants like Google outsource chip implementation to specialized firms. Google owns the TPU architecture and software, but Broadcom handles the silicon implementation, converting Google's designs into manufacturable chip layouts while supplying high-speed networking components, power management systems, and packaging.

This division of labor has made Broadcom indispensable. The company now serves as the implementation partner for two of the three largest U.S. frontier AI model developers. Beyond Anthropic, Broadcom is executing a separate $10 billion custom silicon program with OpenAI, announced as a 10 gigawatt co-development effort last October. Analysts at Mizuho estimated that Broadcom would record $21 billion in AI revenue from Anthropic alone in 2026 and $42 billion in 2027, though the securities filing did not contain specific dollar amounts.

What Does This Mean for Anthropic's Growth Trajectory?

Anthropic's expansion of TPU capacity comes as the company experiences explosive growth. The AI company announced that its annualized revenue run rate has now surpassed $30 billion, up from approximately $9 billion at the end of 2025. More than 1,000 business customers are now spending over $1 million annually on Anthropic's services, double the figure from February.

"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base," said Krishna Rao, Anthropic's Chief Financial Officer.

The new capacity arrangement is contingent on Anthropic's continued commercial performance, meaning the company must maintain its current growth trajectory to unlock the full 3.5 gigawatts. This new TPU capacity supplements, rather than replaces, Anthropic's existing partnership with Amazon Web Services. AWS remains Anthropic's primary cloud and training partner under Project Rainier, which uses Trainium 2 chips in a supercluster located in Indiana.

How the AI Chip Supply Chain Actually Works

  • Design Phase: Google owns the TPU architecture and develops the software stack that runs on these chips, giving the company control over the fundamental capabilities and performance characteristics.
  • Implementation Phase: Broadcom converts Google's architectural designs into silicon layouts that can actually be manufactured, handling the complex engineering of translating an architecture into physical chip specifications.
  • Manufacturing Phase: Taiwan Semiconductor Manufacturing Company (TSMC) handles the actual fabrication of the chips based on Broadcom's designs, producing the physical silicon wafers.
  • Integration Phase: Broadcom supplies additional components including high-speed SerDes (serializer-deserializer chips for data transmission), power management systems, and packaging solutions that make the chips functional in data centers.

This multi-layer approach has become the standard model for frontier AI infrastructure. Rather than any single company controlling the entire pipeline, specialized firms handle specific expertise areas. TSMC manufactures the chips, Broadcom bridges the gap between design and manufacturing, and Google retains architectural control.

The broader AI chip landscape remains diverse, with multiple companies pursuing different strategies. Both Anthropic and OpenAI continue to draw heavily on NVIDIA GPUs through cloud providers including AWS, Google Cloud, and Microsoft Azure. OpenAI has separately committed to 6 gigawatts of AMD GPU capacity, with the first gigawatt expected in the second half of this year.

The Broadcom-Anthropic-Google arrangement extends Anthropic's $50 billion American AI infrastructure commitment announced in November 2025, with the vast majority of the new infrastructure located in the United States . This deal represents not just a supply agreement but a strategic bet that custom silicon partnerships will become the dominant model for scaling AI compute in the coming years.