The Hidden Cost of AI: Why Regulators Are Ignoring Environmental Damage

More than 200 laws regulating artificial intelligence have been developed in more than 100 countries, yet nearly all of them ignore a critical problem: the massive environmental damage caused by AI systems. From manufacturing energy-hungry computer chips to training models that consume hundreds of thousands of liters of freshwater, the AI industry is creating a sustainability crisis that regulators have largely failed to address.

What Environmental Damage Is AI Actually Causing?

The environmental footprint of AI spans the entire lifecycle of the technology. When companies manufacture the specialized computer chips called graphics processing units (GPUs) that power AI training, they must extract rare earth elements from the ground. This extraction contaminates soil and water, pollutes the air, and destroys forest habitats. The problem only intensifies once training begins.

Training large language models is extraordinarily resource-intensive. Researchers estimated in 2025 that training GPT-3, a large language model released by OpenAI in 2020, consumed approximately 700,000 liters of freshwater for electricity generation and data center cooling alone. Even as individual AI models become more energy-efficient, the overall energy consumption continues to rise because models are getting larger and AI is proliferating across industries. Most troubling: the energy consumed when people actually use AI systems vastly outweighs the energy used during the initial training phase.

The disposal of AI hardware creates another environmental burden. As companies retire outdated GPUs and servers, the resulting electronic waste accumulates without adequate recycling infrastructure in many regions. This e-waste contains toxic substances alongside valuable materials that could be recovered but often are not.

Why Are Regulators Missing This Problem?

When the European Union's AI Act came into force on August 1, 2024, it was celebrated as the world's first comprehensive law on artificial intelligence. However, the law's approach reveals a fundamental blind spot. The AI Act acknowledges some environmental consequences and requires that "AI systems are developed and used in a sustainable and environmentally friendly manner." Yet the enforcement mechanisms are weak. Companies must disclose energy consumption data only when requested by the AI Office, and preparing codes of conduct to minimize environmental impact is voluntary, not mandatory.

The United Kingdom's approach is even more limited. The UK has no AI-specific legislation and instead relies on existing laws to regulate AI. The government's 2023 white paper on AI regulation explicitly excludes sustainability from its scope, stating that while sustainability is "an important issue to consider," it falls "outside of the scope of our proposals for a new overarching framework for AI regulation".

Across the globe, the pattern is consistent: privacy, bias, disinformation, and cybersecurity dominate regulatory discussions, while environmental impacts remain largely invisible in policy frameworks. This gap exists partly because the environmental damage is difficult to measure. Technology companies lack transparency about their operations, making it hard for regulators to establish baseline data on energy consumption, water usage, and carbon emissions.

How Can Regulators Fix This Gap?

Experts propose a multi-layered approach to integrating environmental accountability into AI regulation. The foundation must be transparency. AI developers should be required to disclose detailed information about how much energy and water their systems consume, how much carbon is emitted, which rare earth elements are extracted, and how much plastic is used during production. This data would establish a baseline from which regulators can set meaningful targets and limits.

Once baseline data exists, regulators can implement practical measures to reduce environmental harm:

  • Energy Efficiency Standards: Set mandatory targets for energy consumption per model training cycle, with companies required to train AI models on less carbon-intensive energy grids or in water-efficient data centers.
  • Consumer Warnings and Labels: Require warnings that tell users how much carbon dioxide or water is consumed for each AI query, similar to how nutrition labels work on food products.
  • AI Energy Star Rating System: Implement a labeling system that mirrors the EU's existing energy efficiency labels for appliances, ranking AI systems from most efficient (dark green) to least efficient (red), helping consumers make informed choices.
  • Social and Environmental Certification: Create certification programs that verify AI systems meet sustainability standards, allowing organizations to demonstrate environmental responsibility.
  • Tax and Funding Incentives: Offer tax breaks and grants to technology companies that invest in more sustainable AI development practices.
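The label and rating ideas above reduce to simple arithmetic once per-query energy figures are disclosed. A minimal sketch in Python, where the per-query energy, grid carbon intensity, water-use figure, and band thresholds are all illustrative assumptions rather than measurements of any real AI system or any adopted labeling scheme:

```python
# Hypothetical per-query footprint label. All constants below are
# illustrative assumptions, not measured values for any real AI system.

WH_PER_QUERY = 3.0          # assumed energy per query, watt-hours
GRID_G_CO2_PER_KWH = 400.0  # assumed grid carbon intensity, g CO2 per kWh
ML_WATER_PER_KWH = 1800.0   # assumed data-center water use, ml per kWh

def query_footprint(n_queries: int) -> dict:
    """Estimate CO2 (grams) and water (milliliters) for n queries."""
    kwh = n_queries * WH_PER_QUERY / 1000.0
    return {
        "kwh": kwh,
        "g_co2": kwh * GRID_G_CO2_PER_KWH,
        "ml_water": kwh * ML_WATER_PER_KWH,
    }

def efficiency_band(wh_per_query: float) -> str:
    """Map per-query energy to a band, in the spirit of the EU's
    appliance energy labels (A = most efficient). Thresholds are
    made up for illustration."""
    for limit, band in [(1.0, "A"), (2.0, "B"), (4.0, "C"), (8.0, "D")]:
        if wh_per_query <= limit:
            return band
    return "E"

if __name__ == "__main__":
    fp = query_footprint(1000)
    print(f"1000 queries: {fp['g_co2']:.0f} g CO2, {fp['ml_water']:.0f} ml water")
    print("Efficiency band:", efficiency_band(WH_PER_QUERY))
```

The point of the sketch is that none of this is technically hard: the regulatory obstacle is obtaining the disclosed input figures, not the calculation itself.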

These measures would give regulators and consumers the tools to hold AI companies accountable for environmental impacts while encouraging innovation in sustainable AI development.

"By integrating sustainability into AI laws, through these types of measures, the planet can be somewhat safeguarded alongside AI's rapid expansion," explained Louise Du Toit, a lecturer in law at Southampton Law School.

Louise Du Toit, Lecturer in Law, Southampton Law School, University of Southampton

Why This Matters Now

The urgency of addressing AI's environmental impact cannot be overstated. As AI models continue to grow in size and capability, and as AI adoption accelerates across sectors from healthcare to finance to manufacturing, the cumulative environmental damage will compound. Without regulatory intervention, the AI industry's carbon footprint and resource consumption will become a major contributor to climate change and environmental degradation. The current regulatory vacuum means that environmental considerations are entirely absent from the decision-making process when companies develop and deploy new AI systems.

The challenge is that environmental sustainability requires a different regulatory approach than privacy or bias. It demands transparency about resource consumption, mandatory efficiency standards, and long-term accountability for the full lifecycle of AI systems. Most current AI regulations treat environmental issues as secondary concerns, if they address them at all. Changing this will require policymakers to recognize that AI's environmental footprint is not a peripheral issue but a central component of responsible AI governance.