Meta is taking control of its AI infrastructure by partnering with Arm to develop custom processors designed for data centers running massive artificial intelligence (AI) workloads. The collaboration will produce multiple generations of central processing units (CPUs), the general-purpose processors that coordinate AI training and inference workloads. The first chip, called the Arm AGI CPU, is Arm's first data center processor built specifically for the AI era.

## Why Is Meta Building Its Own AI Chips?

As AI systems grow larger and more complex, traditional processors are becoming a bottleneck. Meta's data centers are outgrowing the capabilities of legacy CPUs, so the company needs hardware purpose-built for the demands of modern AI. By developing custom silicon alongside Arm, Meta gains a competitive advantage in delivering AI experiences to billions of users worldwide while optimizing how efficiently its data centers operate.

The Arm AGI CPU delivers higher performance per rack while consuming less power than older processors, addressing a critical challenge in AI infrastructure. This is not just about speed; it is about fitting more computing power into the same physical space while using less electricity. For a company operating data centers at gigawatt scale, these efficiency gains translate directly into lower costs and reduced environmental impact.

"Delivering AI experiences at global scale demands a robust and adaptable portfolio of custom silicon solutions, purpose-built to accelerate AI workloads and optimize performance across Meta's platforms. We worked alongside Arm to develop the Arm AGI CPU to deploy an efficient compute platform that significantly improves our data center performance density and supports a multi-generation roadmap for our evolving AI systems," said Santosh Janardhan, Head of Infrastructure at Meta.

## How Will Meta's Custom Chips Work Alongside Existing Hardware?
Meta is not replacing its entire hardware stack. Instead, the Arm AGI CPU will work alongside the company's existing custom MTIA (Meta Training and Inference Accelerator) silicon. This layered approach allows Meta to optimize different parts of its AI pipeline with specialized hardware: the CPU handles general-purpose computing tasks, while the MTIA accelerators handle the most demanding AI training and inference work.

- Co-development Model: Meta serves as the lead partner and co-developer for the Arm AGI CPU, ensuring the chip meets Meta's specific infrastructure needs and performance requirements.
- Multi-generation Roadmap: The partnership includes plans for multiple generations of CPUs, allowing Meta to continuously improve performance as AI systems evolve.
- Open Ecosystem Approach: Meta will release board and rack designs for the Arm AGI CPU under the Open Compute Project, making the technology available to the broader AI industry.

This open approach is significant because it allows other companies to benefit from Meta's infrastructure innovations. By sharing designs, Meta positions itself as a leader in AI infrastructure while supporting the entire ecosystem's ability to scale AI systems efficiently.

## What Does This Mean for the AI Industry?

Arm's CEO emphasized that the partnership represents a fundamental shift in how data center infrastructure is designed. The Arm AGI CPU reflects the next phase of Arm's computing platform, expanding beyond traditional server processors into territory optimized for what the industry calls "agentic AI" deployments, meaning AI systems that can take autonomous actions.

"AI is reshaping how data center infrastructure is built and deployed at scale. Our collaboration with Meta to co-develop the Arm AGI CPU reflects the next phase of the Arm compute platform, expanding into delivering production silicon CPUs optimized for large-scale agentic AI deployments.
Together, we are combining Arm's foundation of high-performance, power-efficient computing with Meta's infrastructure expertise to support the next generation of AI systems," stated Rene Haas, CEO of Arm.

The partnership signals that major tech companies are moving beyond off-the-shelf processors. Building custom silicon has become essential for companies operating at Meta's scale, where even small efficiency improvements across millions of servers translate into massive cost savings and competitive advantages. This trend is likely to accelerate as AI systems become more central to how technology companies operate.

## How to Understand Meta's Hardware Strategy

- Custom Silicon Portfolio: Meta is building a diverse set of specialized processors, each optimized for a different task within its AI infrastructure, rather than relying on a single type of chip.
- Performance Density Focus: The primary goal is fitting more computing power into limited physical space while reducing power consumption, which directly affects operating costs and environmental footprint.
- Long-term Roadmap: Rather than a one-time chip release, Meta and Arm are planning multiple generations, ensuring the hardware evolves alongside advances in AI technology.

The Arm AGI CPU will be available to the broader AI ecosystem through Arm, meaning other companies can purchase and deploy the same chip in their data centers. This availability extends beyond Meta, though Meta's role as lead partner and co-developer ensures the company benefits from the innovation first.

Meta's investment in custom silicon reflects a broader industry trend: companies such as Google and Amazon have already developed proprietary chips to power their AI and cloud services. By partnering with Arm rather than building entirely from scratch, Meta gains access to Arm's expertise in processor design while maintaining control over the specifications that matter most for its AI workloads.
This hybrid approach balances innovation speed with the proven reliability of established chip architecture.
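To see why the efficiency argument above matters so much at gigawatt scale, here is a rough back-of-envelope sketch. Every figure in it (the fleet size, the electricity price, the 15% efficiency gain) is a hypothetical illustration chosen for the arithmetic, not a number from Meta or Arm:

```python
# Toy estimate of how a modest power-efficiency gain compounds
# at data-center scale. All numbers below are hypothetical.

def annual_power_cost(total_watts: float, usd_per_kwh: float = 0.08) -> float:
    """Annual electricity cost in USD for a constant load."""
    hours_per_year = 24 * 365
    kilowatt_hours = (total_watts / 1000) * hours_per_year
    return kilowatt_hours * usd_per_kwh

fleet_watts = 1e9  # a hypothetical 1 GW fleet
baseline = annual_power_cost(fleet_watts)

# Suppose newer CPUs do the same work at 15% lower power (hypothetical).
improved = annual_power_cost(fleet_watts * 0.85)

print(f"Baseline cost: ${baseline:,.0f}/yr")
print(f"Improved cost: ${improved:,.0f}/yr")
print(f"Savings:       ${baseline - improved:,.0f}/yr")
```

Even at these made-up rates, a 15% efficiency gain on a 1 GW fleet is roughly a nine-figure annual saving, which is why performance per watt, not raw speed, drives decisions like this one.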