Data centers are abandoning the alternating current architecture that won out over Thomas Edison's direct current systems more than a century ago, because artificial intelligence has fundamentally changed how much power computing infrastructure must handle. As AI workloads push individual server racks from 10 kilowatts toward 1 megawatt, the traditional alternating current (AC) power systems that have served data centers since their inception are becoming inefficient, expensive, and impractical. A growing number of vendors and hyperscalers are transitioning to direct current (DC) power distribution, with 800-volt DC systems emerging as the industry standard for next-generation AI facilities.

Why Is AC Power Becoming a Problem for AI Data Centers?

Traditional data centers rely on a complex chain of power conversions that made sense when racks consumed modest amounts of electricity. Power enters a facility as medium-voltage AC (between 1,000 and 35,000 volts), gets stepped down to low-voltage AC (480 or 415 volts) through a transformer, converts to DC inside an uninterruptible power supply (UPS) for battery backup, converts back to AC, and finally converts again to low-voltage DC (typically 54 volts) at the server, where the computing chips themselves require DC power.

Each conversion step incurs energy losses. For traditional workloads, this inefficiency was tolerable, but AI has changed the equation dramatically. A conventional data center rack draws roughly 10 kilowatts; an AI-optimized rack now approaches 1 megawatt, a 100-fold increase. At that scale, the conversion losses, electrical currents, and physical materials required become untenable. According to Nvidia, a single 1-megawatt rack could require as much as 200 kilograms of copper busbar alone. For a 1-gigawatt data center, that balloons to 200,000 kilograms of copper.
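Because per-stage efficiencies multiply, even small losses at each step compound into a meaningful penalty at megawatt scale. A minimal sketch of that arithmetic, using assumed, illustrative per-stage efficiencies (round numbers, not vendor figures):

```python
# Illustrative only: the per-stage efficiencies below are assumed round
# numbers, not measured or vendor-published values.
from functools import reduce

# Traditional AC chain described above: transformer, UPS double
# conversion, then the server power supply.
ac_chain = {
    "MV AC -> 480/415 V AC transformer": 0.99,
    "AC -> DC (UPS rectifier)":          0.97,
    "DC -> AC (UPS inverter)":           0.97,
    "AC -> 54 V DC (server PSU)":        0.96,
}

# Simplified 800 V DC chain: one conversion at the facility perimeter,
# then a DC-DC step-down at the rack.
dc_chain = {
    "MV AC -> 800 V DC (perimeter)": 0.985,
    "800 V DC -> chip-level DC":     0.975,
}

def chain_efficiency(stages: dict) -> float:
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    return reduce(lambda acc, eta: acc * eta, stages.values(), 1.0)

ac_eff = chain_efficiency(ac_chain)
dc_eff = chain_efficiency(dc_chain)

print(f"AC chain end-to-end efficiency: {ac_eff:.1%}")
print(f"DC chain end-to-end efficiency: {dc_eff:.1%}")

# At 1 MW per rack, the gap in losses is substantial:
rack_power_w = 1_000_000
print(f"Extra loss per rack on the AC chain: "
      f"{(dc_eff - ac_eff) * rack_power_w / 1000:.0f} kW")
```

With these assumed numbers the AC chain lands near 89% end-to-end and the DC chain near 96%, in the same ballpark as the roughly 5% system-level gain cited later in the article.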
"The double conversion process ensures the output AC is clean, stable and suitable for data center servers," explained Luiz Fernando Huet de Bacellar, Vice President of Engineering and Technology at Eaton. "But each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss."

How Does 800-Volt DC Power Solve This Problem?

The solution is to convert utility power directly from medium-voltage AC to 800 volts DC at the data center perimeter, then distribute that DC power throughout the facility. This eliminates most intermediate conversion steps, reducing the number of power supply units and cooling fans while improving system reliability and energy efficiency.

The physics behind this improvement is straightforward: higher voltage reduces the current needed to deliver the same power, which lowers resistive losses and makes power transfer more efficient. Switching from 415-volt AC to 800-volt DC enables 85% more power to be transmitted through the same conductor size.

This efficiency gain translates into concrete benefits for hyperscale operations:

- Copper Reduction: 45% less copper required compared to traditional AC-to-DC conversion systems, reducing material costs and physical footprint
- Energy Efficiency: 5% improvement in overall system efficiency, which compounds across gigawatt-scale facilities running continuously
- Total Cost of Ownership: 30% lower total cost of ownership for gigawatt-scale facilities, making the infrastructure investment economically compelling

"In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800 volts DC and then distributed throughout the facility on a DC bus," said Chris Thompson, Vice President of Advanced Technology and Global Microgrids at Vertiv. "At the rack, compact DC-DC converters step that voltage down for GPUs and CPUs."
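The voltage-current tradeoff can be sketched numerically. This is a deliberate single-conductor simplification (it ignores three-phase power factors and real busbar geometry), with an assumed conductor resistance purely for illustration:

```python
# Simplified single-conductor model: ignores three-phase AC power
# factors; the 1-milliohm busbar resistance is an assumed value.

def current_amps(power_w: float, voltage_v: float) -> float:
    """Current required to deliver a given power at a given voltage."""
    return power_w / voltage_v

def resistive_loss_w(current_a: float, resistance_ohm: float) -> float:
    """Ohmic (I^2 * R) loss in a conductor of the given resistance."""
    return current_a ** 2 * resistance_ohm

rack_power_w = 1_000_000   # a 1 MW AI rack
conductor_r = 0.001        # assumed 1-milliohm busbar run

i_415 = current_amps(rack_power_w, 415)
i_800 = current_amps(rack_power_w, 800)

print(f"415 V: {i_415:,.0f} A, "
      f"loss {resistive_loss_w(i_415, conductor_r) / 1000:.1f} kW")
print(f"800 V: {i_800:,.0f} A, "
      f"loss {resistive_loss_w(i_800, conductor_r) / 1000:.1f} kW")
```

Because loss scales with the square of current, nearly doubling the voltage cuts ohmic loss in the same conductor to roughly (415/800)² ≈ 27% of its former value, which is the mechanism behind the conductor-sizing and copper figures above.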
Which Companies Are Leading the Transition?

Several major power infrastructure vendors are racing to commercialize 800-volt DC systems. Vertiv is developing an 800-volt DC ecosystem that integrates with Nvidia's Vera Rubin Ultra Kyber platforms, with commercial availability expected in the second half of 2026. Eaton is advancing a medium-voltage solid-state transformer (SST) that will serve as the core of its DC power distribution system. Delta has already released 800-volt DC in-row power racks delivering 660 kilowatts with 480 kilowatts of embedded battery backup. SolarEdge is working on a 99%-efficient SST paired with a native DC UPS and a DC power distribution layer.

Interestingly, higher-voltage DC data centers have already appeared in China, according to a report from technology advisory group Omdia. In North America, the Mt. Diablo Initiative, a collaboration among Meta, Microsoft, and the Open Compute Project, is running a 400-volt DC rack power distribution experiment.

What Obstacles Stand in the Way of Industry-Wide Adoption?

Despite the clear technical and economic advantages, the industry faces significant coordination challenges. Much of the sector is still focused on 400-volt DC systems rather than the more efficient 800-volt standard. Patrick Hughes, Senior Vice President of Strategy, Technical, and Industry Affairs for the National Electrical Manufacturers Association (NEMA), emphasized that the industry needs a complete, coordinated ecosystem spanning power electronics, protection systems, connectors, sensing equipment, and service-safe components that scale together rather than in isolation.
Building this ecosystem requires substantial capital investment across multiple fronts: retooling manufacturing capacity for DC-specific equipment, expanding semiconductor and materials supply chains, and securing long-term demand commitments that justify major capital expenditures. Many suppliers are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments.

Steps to Understanding the DC Power Transition in Data Centers

- Understand the Conversion Chain: Traditional AC data centers convert power multiple times before reaching chips, with each conversion losing energy; DC systems eliminate most of these steps by converting once at the facility perimeter
- Recognize the Scale Problem: AI racks consuming 1 megawatt require 200 kilograms of copper busbar each; a 1-gigawatt facility needs 200,000 kilograms total, making material efficiency critical for cost and sustainability
- Evaluate the Economic Case: 800-volt DC systems offer 45% copper reduction, 5% efficiency gains, and 30% lower total cost of ownership for gigawatt-scale facilities, making the infrastructure transition financially justified
- Monitor Supply Chain Development: Standardization of connectors, protection systems, and safety frameworks is essential before widespread adoption; watch for NEMA guidance and industry consensus on 800-volt versus 400-volt standards

The transition from AC to DC power represents one of the most significant infrastructure shifts in data center history, driven entirely by the computational demands of artificial intelligence. As AI workloads continue to grow, the economics of DC power distribution become increasingly compelling, but realizing that potential requires the entire industry to coordinate on standards, manufacturing capacity, and long-term commitments. The next few years will determine whether this transition happens smoothly or becomes a bottleneck for AI infrastructure expansion.