The CPU Is Having a Comeback Moment in AI Data Centers, and It's About Power Efficiency
CPUs, long overshadowed by graphics processing units (GPUs) in AI discussions, are suddenly essential to solving data center power consumption challenges. A major funding announcement reveals that the computing industry is rethinking processor design for artificial intelligence workloads, moving away from power-hungry legacy architectures toward more efficient alternatives that can handle the complex coordination tasks GPUs weren't designed for.
Why Are CPUs Suddenly Critical for AI Data Centers?
As AI systems evolve toward more autonomous and complex models, known as agentic AI, the role of the CPU has fundamentally changed. While GPUs excel at processing massive amounts of data in parallel, CPUs orchestrate the overall system, managing data flow and coordination tasks that accelerators cannot handle efficiently. This division of labor means that data center performance and power consumption now depend heavily on CPU design choices.
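To make that division of labor concrete, here is a minimal, hypothetical Python sketch of an agentic serving loop. All function names and timings are illustrative assumptions, not any vendor's actual stack: `gpu_inference` stands in for the parallel model math GPUs handle, while the CPU-side event loop does the sequencing, tool calls, and bookkeeping that agentic workloads multiply.

```python
import asyncio

# Hypothetical stand-ins: in a real stack these would be an inference
# runtime (GPU-bound) and external tools/services (CPU- and I/O-bound).
async def gpu_inference(prompt: str) -> str:
    """Parallel tensor math: the part GPUs are built for."""
    await asyncio.sleep(0.05)          # placeholder for a model forward pass
    return f"plan for: {prompt}"

async def call_tool(step: str) -> str:
    """Coordination work (API calls, parsing, routing) stays on the CPU."""
    await asyncio.sleep(0.01)          # placeholder for I/O and bookkeeping
    return f"result of {step}"

async def agent_request(prompt: str) -> list[str]:
    # The CPU orchestrates: it sequences GPU calls and tool calls,
    # manages state, and decides what runs next.
    plan = await gpu_inference(prompt)
    steps = [f"{plan} / step {i}" for i in range(3)]
    return await asyncio.gather(*(call_tool(s) for s in steps))

async def main() -> None:
    # Many concurrent agent sessions: scheduling them efficiently is
    # exactly the kind of load that lands on data center CPUs.
    results = await asyncio.gather(*(agent_request(f"task {i}") for i in range(4)))
    print(len(results), "agent sessions completed")

asyncio.run(main())
```

The point of the sketch is that as concurrent sessions and tool calls multiply, the orchestration work itself becomes a first-order consumer of CPU cycles, and therefore of data center power.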
The challenge is urgent. Modern AI data centers consume enormous amounts of electricity, and every percentage point of efficiency matters at hyperscale. Legacy CPU architectures, designed decades ago before AI became a dominant workload, were never optimized for the performance-per-watt requirements of today's machine learning systems. This mismatch has created an opportunity for newer, more efficient processor designs to gain traction.
What's Driving the Shift Away From Traditional Processors?
SiFive, a company specializing in RISC-V processor intellectual property (IP), just raised $400 million in Series G funding to accelerate development of high-performance data center solutions. The funding round, led by Atreides Management and including investors like Apollo Global Management, NVIDIA, and T. Rowe Price Investment Management, values the company at $3.65 billion.
RISC-V is an open-standard processor architecture, meaning it isn't controlled by a single company like Intel or AMD. This openness allows hyperscalers and chip designers to customize processors for their specific needs without being locked into proprietary designs. For data center operators managing massive AI workloads, this flexibility translates directly into better performance and lower power consumption.
"Hyperscale customers have made it very clear that it is time to accelerate the availability of open standard alternatives for the data center. Their consistent ask is for customizable CPU solutions in IP form, that will enable them to meaningfully differentiate their data center compute solutions," said Patrick Little, SiFive Chairman and CEO.
The funding will support three key areas of development. SiFive plans to expand research and development on scalar, vector, and matrix CPU architectures; build out software ecosystems, including ports of CUDA, Red Hat, and Ubuntu; and work directly with customers to streamline deployment, including integration with NVIDIA NVLink Fusion technology.
How Are Data Center Operators Addressing Power and Cooling Challenges?
The power efficiency problem extends beyond just processor design. In regions like the Gulf, where data center expansion is accelerating, the challenge becomes even more complex. The extreme summer heat in these areas creates seasonal peaks in both electricity and water demand, straining infrastructure designed for more moderate climates.
Gulf states are investing heavily in AI infrastructure, but the region's arid climate means cooling data centers requires massive amounts of water. By 2030, the United Arab Emirates' AI sector alone may require approximately 61 billion liters of water per year. In Saudi Arabia, data center power demand is expected to grow at a 29 percent compound annual growth rate, which will drive substantially higher water demand as well.
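For a sense of what a 29 percent compound annual growth rate implies, the short sketch below projects demand from an assumed baseline of 1.0 in arbitrary units, starting in an assumed year of 2024; the baseline and start year are illustrative assumptions, and only the growth rate comes from the figure above.

```python
# Rough illustration of 29% compound annual growth in data center power demand.
# The 2024 baseline of 1.0 (arbitrary units) is an assumption for illustration;
# the 29% CAGR is the figure cited above.
baseline = 1.0
cagr = 0.29

for year in range(2024, 2031):
    demand = baseline * (1 + cagr) ** (year - 2024)
    print(f"{year}: {demand:.2f}x baseline")

# By 2030 demand is roughly 1.29 ** 6 ≈ 4.6x the baseline level,
# which is why cooling and water planning have to scale with it.
```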
To address these challenges, Gulf regulators and technology companies are implementing a multi-pronged approach:
- Strategic Seasonal Storage: Investing in water storage systems specifically designed to handle peak summer demand for digital infrastructure cooling.
- Dual Efficiency Standards: Implementing requirements that measure both water and energy efficiency, rather than treating them as separate concerns.
- Non-Potable Water Use: Mandating that new data centers use treated sewage effluent and other non-potable sources instead of desalinated freshwater.
- Advanced Cooling Technologies: Deploying solar-powered desalination plants and energy-efficient cooling systems designed specifically for hot, arid climates.
Dubai is constructing what it claims will be the world's most energy-efficient desalination plant, powered entirely by solar energy. This represents a broader regional shift toward coupling new water infrastructure with cleaner power sources rather than relying on traditional gas-fired plants.
How Are Countries Building Sovereign AI Infrastructure With Efficient Processors?
Japan offers another model for addressing data center efficiency through processor innovation. Fujitsu is developing an AI inference device that combines its own neural processing units (NPUs) with custom CPUs, aiming to reduce Japan's dependence on foreign semiconductor technology while improving energy efficiency.
The Fujitsu-Monaka CPU is an Arm-based processor designed specifically for data centers and Japan's FugakuNEXT supercomputer. Currently fabricated by Taiwan Semiconductor Manufacturing Company (TSMC) using a 2-nanometer process, it will eventually be produced domestically by Rapidus using a 1.4-nanometer process developed in partnership with IBM.
In March 2026, Fujitsu began manufacturing AI servers at its Kasashima factory in Ishikawa Prefecture, equipped with both NVIDIA Blackwell GPUs and Fujitsu-Monaka CPUs. These servers are designed to maximize energy efficiency and reduce overall data center power consumption. The servers are being sold in both Europe and Japan, reflecting strategic technology partnerships between the regions.
"The CPU is suddenly exciting again, especially for applications in the data center. SiFive spotted this trend early and is well-positioned to benefit as the industry evolves," noted Dan Newman, CEO and Chief Analyst at The Futurum Group.
Japan is also investing in alternative memory technologies to reduce GPU power consumption. In December 2025, Fujitsu and Riken joined an effort to develop Z-Angle Memory (ZAM), a high-capacity, high-bandwidth semiconductor technology with half the power consumption of the high-bandwidth memory (HBM) currently used in NVIDIA and AMD GPUs. The goal is to create prototypes by March 2028 and begin mass production for AI data centers by March 2030.
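As a rough back-of-envelope illustration of what halving memory power could mean at rack scale, the sketch below uses assumed values for rack size and per-accelerator memory power; only the 50 percent reduction ratio comes from the ZAM effort described above.

```python
# Back-of-envelope: effect of halving memory power across a rack of accelerators.
# All absolute numbers are assumptions for illustration; only the
# "half the power of current HBM" ratio comes from the ZAM announcement.
accelerators_per_rack = 72          # assumed rack configuration
hbm_power_per_accel_w = 100.0       # assumed memory-subsystem power per device, in watts

hbm_rack_w = accelerators_per_rack * hbm_power_per_accel_w
zam_rack_w = hbm_rack_w * 0.5       # the claimed 50% reduction

print(f"Assumed HBM memory power per rack: {hbm_rack_w / 1000:.1f} kW")
print(f"With a 50% reduction:              {zam_rack_w / 1000:.1f} kW")
print(f"Savings per rack:                  {(hbm_rack_w - zam_rack_w) / 1000:.1f} kW")
```

Multiplied across thousands of racks, savings of this order are why memory power is now treated as a data center design variable alongside processors and cooling.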
What Does This Mean for the Future of Data Center Design?
The convergence of these trends suggests that data center design is entering a new era. Rather than simply scaling up existing infrastructure, operators must now optimize for power efficiency at every level: processor architecture, cooling systems, water management, and memory technology. The $400 million investment in RISC-V development signals that the industry believes open-standard, customizable processors will be central to this transition.
Industry analysts estimate that the market opportunity for next-generation AI and agentic data center infrastructure could exceed $100 billion. Companies that can deliver both performance and power efficiency will capture significant value in this expanding market. The combination of SiFive's funding, Japan's sovereign AI initiatives, and Gulf region infrastructure investments suggests that data center efficiency is becoming a competitive advantage rather than a cost center.