The Hidden Efficiency Gap: Why Data Centers Still Waste 15% of Their Power
Data centers are losing billions of dollars in wasted electricity every year, and the culprit isn't cooling or servers: it's the power distribution system itself. Traditional alternating current (AC) power systems, which have powered data centers for decades, convert electricity multiple times before it reaches a GPU or processor. Each conversion wastes energy. Direct current (DC) systems, by contrast, can reduce total power system losses from roughly 25% down to 10%, meaning more of the electricity you pay for actually powers your computing.
The problem is becoming urgent. As artificial intelligence workloads explode, GPU power consumption is climbing steeply, and rack-level power demands are projected to keep rising through 2028. Data centers globally consumed about 415 terawatt-hours of electricity in 2024, roughly 1.5% of all global electricity use. The International Energy Agency projects that figure will more than double to around 945 terawatt-hours by 2030, with AI as the primary driver. In the United States alone, data centers reached 176 terawatt-hours in 2023, about 4.4% of total U.S. electricity consumption, with projections reaching 325 to 580 terawatt-hours by 2028, equivalent to 6.7% to 12.0% of U.S. electricity use.
At Data Center World 2026, LS ELECTRIC America is showcasing how DC grid solutions could address this efficiency crisis. The company's approach focuses on replacing traditional AC power distribution with DC architectures that minimize energy conversion stages and improve overall electricity utilization from the grid all the way to individual server racks.
Why Does Power Distribution Matter More Than You'd Think?
Most people assume data center energy waste happens in cooling systems or from idle servers. While those are real problems, they miss a bigger picture. The electrical chain that delivers power to your servers involves multiple conversion steps: from the grid to transformers, through uninterruptible power supply (UPS) systems, into distribution switchgear, and finally down to individual racks. Each step introduces losses.
LS ELECTRIC's DC grid concept addresses this by eliminating redundant conversions. Instead of converting power from AC to DC multiple times, a unified DC architecture uses specialized equipment like solid-state transformers, DC/DC converters, and solid-state circuit breakers to maintain direct current throughout the entire energy chain. The result is that more electricity reaches your GPUs instead of being dissipated as heat in power conversion equipment.
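The math behind "each step introduces losses" is simple compounding: end-to-end efficiency is the product of each stage's efficiency. A minimal sketch, using illustrative per-stage efficiencies (assumptions for the sake of the arithmetic, not measured figures for any vendor's equipment):

```python
def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency is the product of per-stage efficiencies."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Traditional AC chain: transformer -> double-conversion UPS -> PDU -> rack PSU
# (stage values are illustrative assumptions)
ac = chain_efficiency([0.98, 0.92, 0.97, 0.94])   # ~0.82, i.e. ~18% lost
# Simplified DC chain: solid-state transformer -> rack-level DC/DC converter
dc = chain_efficiency([0.97, 0.97])               # ~0.94, i.e. ~6% lost

print(f"AC chain efficiency: {ac:.1%}")
print(f"DC chain efficiency: {dc:.1%}")
```

The point of the sketch is that even four individually respectable stages compound into meaningful losses, which is why removing conversion stages outright beats incrementally improving each one.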
This matters because energy costs dominate data center operating budgets. When you're losing 15% of your power to inefficient distribution, you're essentially paying for electricity you never use. For a large AI data center consuming hundreds of megawatts, that translates to millions of dollars annually in wasted spending.
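To put a rough dollar figure on that 15%, here is a back-of-envelope calculation. The facility size and tariff are assumptions chosen for illustration; the 15% and 10% loss shares come from the figures cited above:

```python
# Assumed inputs: a 200 MW facility at $0.07/kWh, running year-round.
facility_mw = 200
price_per_kwh = 0.07
hours_per_year = 8760

annual_kwh = facility_mw * 1000 * hours_per_year
annual_cost = annual_kwh * price_per_kwh

loss_ac = 0.15   # share lost in an inefficient AC distribution chain
loss_dc = 0.10   # share lost after a hypothetical DC retrofit

wasted_ac = annual_cost * loss_ac
wasted_dc = annual_cost * loss_dc
savings = wasted_ac - wasted_dc

print(f"Annual spend on AC-chain losses: ${wasted_ac / 1e6:.1f}M")
print(f"Annual spend on DC-chain losses: ${wasted_dc / 1e6:.1f}M")
print(f"Potential annual savings:        ${savings / 1e6:.1f}M")
```

Even a five-percentage-point reduction in distribution losses is worth several million dollars a year at this scale, which is the "millions of dollars annually" the paragraph above refers to.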
How to Reduce Data Center Energy Waste Across Multiple Layers?
Operators who want to cut electricity consumption and carbon emissions in 2026 have several practical levers they can pull. The most effective programs treat energy as a measurement and control problem across both IT infrastructure and facility systems:
- Right-size IT workloads: Identify idle or underused servers, storage arrays, and network ports, then safely remove or consolidate them. Many facilities silently drift toward lower utilization while cooling systems run at full capacity.
- Upgrade cooling strategy for high-density reality: Move beyond traditional air cooling for the hottest racks by adopting targeted liquid cooling solutions, such as direct-to-chip cooling or rear-door heat exchangers, while keeping the rest of the room optimized for air-based systems.
- Reduce electrical chain losses: Replace aging AC distribution with modern DC systems, upgrade UPS equipment, and optimize medium and low-voltage switchgear to minimize energy conversion overhead.
- Implement smart power management: Use CPU and GPU power state controls, scheduled batch windows, and dynamic voltage and frequency scaling (DVFS) to reduce peak demand during critical periods.
- Adopt standardized measurement frameworks: Use ISO/IEC 30134-2:2016 for Power Usage Effectiveness (PUE) and ISO 50001 for continuous energy management improvement across the facility.
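The first lever above, right-sizing, starts with a utilization inventory. A minimal sketch of that step, with hypothetical server names, utilization figures, and a 10% idle threshold chosen purely for illustration:

```python
# Hypothetical inventory: average CPU utilization per server over a month.
servers = {
    "web-01": 0.62,
    "web-02": 0.04,
    "batch-01": 0.55,
    "legacy-app": 0.02,
    "db-01": 0.71,
}

# Servers below this average utilization are flagged for review
# before consolidation or decommissioning (threshold is an assumption).
IDLE_THRESHOLD = 0.10

candidates = sorted(
    name for name, util in servers.items() if util < IDLE_THRESHOLD
)
print("Consolidation candidates:", candidates)
```

In practice the utilization data would come from monitoring tooling rather than a hand-written dictionary, and flagged machines would be reviewed for dependencies before anything is powered off.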
Research from Georgia Institute of Technology shows that underclocking GPUs (reducing their processing speed) can cut power consumption by 40% with only a 22% performance loss, suggesting that demand-response interventions are technically feasible. Pre-cooling data center equipment during off-peak hours is another immediate win, since cooling accounts for 20% to 40% of total data center energy use.
What's Stopping Data Centers From Switching to DC Power?
If DC systems are so much more efficient, why hasn't the industry already switched? The answer reveals a gap between technical capability and economic incentive. Data center operators face real barriers to adoption, even when the long-term math favors change.
First, upfront capital costs are significant. Retrofitting an existing data center with DC power distribution requires replacing transformers, switchgear, UPS systems, and rack-level power delivery equipment. For a facility that already has functioning AC infrastructure, the business case depends on how long the operator plans to keep the facility running and how much electricity costs in their region.
Second, reliability concerns loom large. Data center operators are extremely risk-averse when it comes to power systems, since even brief outages can cost millions in lost revenue. Switching to newer DC technology introduces unfamiliar failure modes and requires staff retraining. Many operators prefer the devil they know.
Third, electricity costs alone don't always justify the investment. Research from Georgia Tech found that data center operators generally do not change their behavior in response to electricity price signals, because job revenue far outweighs energy costs under normal conditions. A GPU rented at $2 per hour consumes only $0.04 worth of electricity at average prices, making efficiency improvements unattractive unless prices spike dramatically.
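The ratio behind that example is what makes price signals so weak. A quick sketch: the $2/hour rental figure is from the cited research, while the tariff and effective draw below are assumptions chosen to reproduce the $0.04 electricity cost:

```python
rental_per_hour = 2.00   # $/hr GPU rental rate (cited figure)
price_per_kwh = 0.08     # $/kWh average tariff (assumption)
gpu_draw_kw = 0.5        # effective draw incl. overhead, kW (assumption)

energy_cost_per_hour = gpu_draw_kw * price_per_kwh
ratio = rental_per_hour / energy_cost_per_hour

print(f"Electricity cost: ${energy_cost_per_hour:.2f}/hr")
print(f"Rental revenue is {ratio:.0f}x the energy cost")
```

With revenue running about fifty times the energy bill, even a doubling of electricity prices barely dents the economics of keeping GPUs busy, which is why the researchers look beyond price signals.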
This economic reality suggests that price-based incentives alone are unlikely to drive widespread adoption of DC systems or other efficiency measures. Instead, researchers propose alternative strategies, such as dynamic pricing models that allow users to specify acceptable trade-offs in job characteristics in exchange for discounts, or negotiated reductions in interconnection costs for data centers that commit to reducing demand during critical grid periods.
What Metrics Should Data Centers Track to Measure Real Efficiency?
Measuring data center efficiency is trickier than it sounds. The most common metric, Power Usage Effectiveness (PUE), divides total facility energy by IT energy to show how much overhead you spend delivering computing power. A PUE of 1.5 means you use 1.5 watts of total facility power for every watt of IT power. Industry average PUE has remained relatively flat in recent years, around 1.56 in 2024, but newer designs often achieve 1.3 or better, meaning many legacy facilities have significant headroom for improvement.
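The PUE calculation itself is a one-liner; a short sketch comparing a facility at the 2024 industry average against a newer design (the absolute power figures are hypothetical, chosen to hit the 1.56 and 1.30 values cited above):

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

# Hypothetical sites with identical 10 MW IT loads
legacy = pue(15_600, 10_000)   # near the 2024 industry average
modern = pue(13_000, 10_000)   # a newer design

overhead_gap = legacy - modern
print(f"Legacy PUE: {legacy:.2f}, modern PUE: {modern:.2f}")
print(f"Overhead gap: {overhead_gap:.2f} kWh per IT kWh")
```

For the same IT load, the legacy site spends 0.26 extra kilowatt-hours of overhead for every IT kilowatt-hour delivered, which is the "headroom" the paragraph above describes.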
However, PUE alone can be misleading. A facility can improve its PUE while total electricity consumption keeps rising, and PUE ignores how efficiently the IT layer itself is running. In 2026, the most effective programs track multiple metrics across the entire energy chain:
- Water Usage Effectiveness (WUE): Measures liters of water used per kilowatt-hour of IT energy. Water is now a first-order constraint for data center siting and operations, not a footnote, especially in water-stressed regions.
- Water Usage Impact (WUI): Adjusts water efficiency for local water stress factors, helping prevent outcomes where a facility looks efficient on paper but operates in a water-stressed basin.
- Carbon intensity metrics: Track both the carbon intensity of the electricity grid (kilograms of CO2 per kilowatt-hour) and total facility emissions in metric tons of CO2 equivalent, enabling carbon-aware operations.
- IT utilization and efficiency: Measure how much useful compute you get per kilowatt-hour. AI and high-density racks make IT-side optimization a major lever again, but this metric is harder to standardize and requires workload visibility.
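The water and carbon metrics above follow the same ratio pattern as PUE. A minimal sketch with hypothetical annual figures (the water volume, IT energy, grid intensity, and total consumption below are all illustrative assumptions):

```python
def wue(water_liters, it_kwh):
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return water_liters / it_kwh

def facility_emissions_tco2(total_kwh, grid_kg_co2_per_kwh):
    """Facility emissions in metric tons CO2e from grid carbon intensity."""
    return total_kwh * grid_kg_co2_per_kwh / 1000

# Hypothetical site: 50M liters of water, 40 GWh of IT energy,
# 60 GWh total consumption on a 0.4 kg CO2/kWh grid.
w = wue(50_000_000, 40_000_000)
emissions = facility_emissions_tco2(60_000_000, 0.4)

print(f"WUE: {w:.2f} L/kWh")
print(f"Emissions: {emissions:,.0f} tCO2e/yr")
```

A WUI-style adjustment would then weight the WUE figure by a local water-stress factor, so the same 1.25 L/kWh reads worse in a stressed basin than in a water-rich one.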
The challenge is that significant data gaps remain regarding utilization patterns and workload energy profiles across the industry. Without greater transparency, it is difficult to quantify flexibility potential or design effective incentive structures that would encourage operators to adopt more efficient technologies like DC power systems.
As AI workloads continue to expand and electricity demand from data centers approaches 10% of U.S. consumption by decade's end, the pressure to rethink power distribution will only intensify. DC grid solutions offer a clear technical pathway to higher efficiency and reduced operational costs, but realizing that potential will require not just better technology, but also new business models and market mechanisms that align operator incentives with grid and climate goals.