Why AI Companies Are Looking to Space to Solve Earth's Power Crisis
The power demands of artificial intelligence are hitting a hard ceiling on Earth, and one startup is proposing a radical solution: move AI data centers into space. Orbital, a Los Angeles-based company backed by a16z Speedrun, has announced a 2027 launch date for its first space-based AI data center test mission, positioning low Earth orbit as a potential answer to the energy constraints that are now limiting how fast AI infrastructure can grow.
What's Driving AI Companies to Look Beyond Earth?
The problem is straightforward but urgent. Data centers that train and run AI models consume enormous amounts of electricity and require sophisticated cooling systems to prevent hardware from overheating. As demand for AI compute accelerates, access to power and cooling capacity has become the bottleneck, not the availability of chips themselves. Euwyn Poon, CEO and Founder of Orbital, explained the core challenge: "The energy ceiling on AI isn't theoretical, it's a real constraint that will impede the advancement of intelligence. This is the solution."
The shift reflects a growing recognition across the AI industry that traditional terrestrial infrastructure may not be able to keep pace with demand. Some technology companies are already purchasing nuclear power plants to sustain AI growth, according to researchers at Arizona State University. Meanwhile, European chip startups are racing to develop more efficient inference processors, with companies like Euclyd claiming they can deliver 100 times higher power efficiency for AI inference compared to Nvidia's latest generation chips.
How Would Space-Based AI Data Centers Actually Work?
Orbital's approach leverages two fundamental advantages of low Earth orbit: continuous solar power and the ability to radiate heat directly into space. Unlike terrestrial data centers that depend on electrical grids and cooling systems, satellites in orbit can harness uninterrupted sunlight and use the vacuum of space as a natural heat sink. The company is building a network of satellites equipped with Nvidia-powered servers designed to operate in this environment.
The Orbital-1 mission, scheduled to launch on a SpaceX Falcon 9 in April 2027, will test whether this concept works in practice. The mission will validate sustained GPU performance in orbit, including radiation resilience and the ability to run commercial AI inference workloads. Inference, the process of running trained AI models to generate predictions or answers, is less latency-sensitive than model training, making it better suited for distributed satellite networks where compute can be scaled across multiple nodes rather than requiring tightly coupled clusters.
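To get a feel for what "using the vacuum of space as a heat sink" implies in practice, here is a back-of-envelope radiator sizing sketch using the Stefan-Boltzmann law. The specific numbers (a flat two-sided radiator, emissivity 0.9, operating at 300 K, absorbed sunlight and Earthshine ignored) are illustrative assumptions, not figures from Orbital:

```python
# Back-of-envelope radiator sizing for an orbital data center.
# Assumptions (not from the article): a flat plate radiating from
# both faces, emissivity 0.9, surface temperature 300 K, and absorbed
# sunlight/Earthshine neglected for simplicity.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_watts, temp_k=300.0, emissivity=0.9, faces=2):
    """Area needed to reject `heat_watts` purely by thermal radiation."""
    flux_per_face = emissivity * SIGMA * temp_k ** 4  # W per m^2, per face
    return heat_watts / (faces * flux_per_face)

# A hypothetical 1 MW of server heat (a modest data-center hall)
# would need roughly:
area = radiator_area_m2(1e6)
print(f"{area:.0f} m^2 of two-sided radiator")  # prints "1210 m^2 of two-sided radiator"
```

The point of the estimate is that radiative cooling scales with area and the fourth power of temperature, so orbital data centers trade pumps and chillers for large, lightweight radiator surfaces.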
Steps to Understanding the Space-Based AI Infrastructure Opportunity
- Identify the constraint: Power and cooling capacity, not chip availability, are now the limiting factors for scaling AI infrastructure on Earth, forcing companies to seek alternative solutions.
- Recognize the orbital advantage: Low Earth orbit provides continuous solar energy and natural radiative cooling into space, eliminating dependence on terrestrial power grids and traditional cooling systems.
- Understand the workload fit: Space-based infrastructure targets AI inference tasks, which can be distributed across satellite constellations without the tight coupling required for model training.
- Evaluate the timeline: Orbital's 2027 test mission will determine whether distributed AI compute can operate reliably in orbit and support commercial workloads at scale.
Andrew Chen, General Partner at a16z Speedrun, noted the ambition behind the investment: "Speedrun backs founders to explore ambitious ideas, the harder the problem, the better. Orbital is taking on AI's biggest constraint with a bold and radical idea."
Why Is the Chip Industry Also Racing to Solve the Power Problem?
While Orbital pursues an orbital solution, the semiconductor industry is pursuing efficiency gains on Earth. European chip startups are attracting significant investment as they develop alternatives to Nvidia's general-purpose GPUs, which were originally designed for video games and adapted for AI. These new companies argue that existing GPU architecture wasn't built for AI inference, where the priorities differ from training.
Euclyd, a Dutch startup backed by the former CEO of chipmaking equipment giant ASML, is in discussions with investors for a funding round of at least 100 million euros, or approximately $118 million. The company claims its chip architecture can deliver 100 times higher power efficiency for inference by processing data in multiple places simultaneously, rather than constantly moving data through memory as GPUs do. Other European startups pursuing similar approaches include Optalysys, planning a $100 million-plus fundraise, and Fractile and Arago, both reportedly fundraising for nine-figure rounds.
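Why does "moving data through memory" dominate the power bill? A rough sketch makes the claim concrete. The per-operation energy figures below are order-of-magnitude numbers from published circuit surveys, not measurements of Euclyd's or Nvidia's actual silicon, and the model (one multiply-accumulate with two operands fetched from a given memory tier) is deliberately simplified:

```python
# Illustrative energy budget for one multiply-accumulate (MAC),
# comparing operands fetched from off-chip DRAM against operands
# already staged in on-chip SRAM. Figures are rough, order-of-magnitude
# values from circuit-survey literature, not any vendor's real numbers.

ENERGY_PJ = {
    "fp32_mac": 4.0,         # the arithmetic itself
    "sram_read_64b": 10.0,   # on-chip buffer access
    "dram_read_64b": 2000.0, # off-chip DRAM access
}

def mac_energy_pj(source):
    """Energy of one MAC whose two operands come from `source` memory."""
    return ENERGY_PJ["fp32_mac"] + 2 * ENERGY_PJ[f"{source}_read_64b"]

ratio = mac_energy_pj("dram") / mac_energy_pj("sram")
print(f"DRAM-fed MAC costs ~{ratio:.0f}x an SRAM-fed one")  # prints "DRAM-fed MAC costs ~167x an SRAM-fed one"
```

Under these assumptions the arithmetic is nearly free and the memory traffic is the cost, which is why architectures that keep data near the compute units can plausibly claim large efficiency multiples.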
At Arizona State University, researcher Aman Arora is exploring a different approach using field-programmable gate arrays, or FPGAs, which are chips that can be reconfigured after they leave the factory. Unlike traditional processors with fixed internal wiring, FPGAs can be reprogrammed to perform specific tasks directly, eliminating the overhead of instruction fetching and decoding. Arora explained the advantage: "With an FPGA, there is no instruction decode, no instruction fetch happening. So no overhead."
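Arora's fetch-and-decode point can be illustrated with a toy software analogy (hardware itself would be described in a language like Verilog; this Python sketch is only a conceptual stand-in). A stored-program processor walks a program step by step, fetching and decoding each instruction before executing it, while an FPGA's configured wiring simply is the computation:

```python
# Toy analogy for the FPGA advantage: the "interpreter" pays per-step
# fetch/decode overhead to compute a*x + b from a stored program; the
# "hard-wired" version is a fixed datapath with no such overhead.
# This is a conceptual illustration, not how any real chip is programmed.

def interpreted(a, x, b):
    program = [("mul", "a", "x"), ("add", "acc", "b")]  # stored program
    env = {"a": a, "x": x, "b": b}
    acc = 0
    for op, lhs, rhs in program:            # fetch the next instruction
        if op == "mul":                     # decode its opcode
            acc = env[lhs] * env[rhs]       # finally, execute
        elif op == "add":
            acc = (acc if lhs == "acc" else env[lhs]) + env[rhs]
    return acc

def hard_wired(a, x, b):
    return a * x + b  # fixed-function datapath: no fetch, no decode

assert interpreted(3, 4, 5) == hard_wired(3, 4, 5) == 17
```

Both functions produce the same result; the difference is that the first spends work deciding *what* to do at every step, which is exactly the overhead a reconfigured FPGA avoids.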
What Does This Mean for the Future of AI Infrastructure?
The convergence of these efforts, from space-based compute to specialized chips to reconfigurable hardware, suggests that the AI industry is moving away from the idea of a single, ever-faster general-purpose processor. Instead, the future appears to involve a toolkit of specialized systems designed for specific tasks but flexible enough to evolve. This shift also has environmental implications. Instead of discarding hardware every few years, reconfigurable chips like FPGAs can be repeatedly repurposed, reducing both energy use and the need for new manufacturing.
The stakes are high. European AI chip startups have raised $800 million so far in 2026, compared with $4.7 billion for their U.S. counterparts, according to Dealroom data. Yet the geopolitical tailwinds are clear, with U.S. export controls on advanced chips, concentration risk around Taiwan's TSMC, and a genuine European sovereign compute imperative pushing capital toward homegrown silicon.
If Orbital's 2027 test mission succeeds, it could fundamentally reshape how AI infrastructure is deployed, particularly as energy demand from data centers continues to rise faster than available supply on Earth. The company is already filing with the Federal Communications Commission as it works toward deploying a wider constellation of AI compute satellites. For now, the race is on, both in orbit and on the ground, to solve the energy ceiling that is beginning to constrain AI's growth.