Why AI Laws Are Missing the Environmental Crisis Hiding in Plain Sight

More than 200 AI laws have been enacted across over 100 countries, yet most overlook a critical problem: the environmental toll of artificial intelligence itself. While regulators focus on privacy, bias, and security, they're largely ignoring the massive greenhouse gas emissions, water depletion, and pollution created by training and running AI systems. A new analysis of regulatory frameworks in the European Union and United Kingdom reveals a troubling gap that could undermine climate goals as AI expands globally.

What Environmental Damage Is AI Actually Causing?

The environmental impact of AI spans the entire lifecycle of the technology, from the moment rare earth elements are extracted from the ground to the final disposal of hardware. Training large language models (LLMs), which are AI systems designed to understand and generate human language, requires enormous amounts of energy and water. A 2025 research estimate found that training GPT-3, OpenAI's influential language model released in 2020, consumed approximately 700,000 liters of freshwater just for electricity generation and data center cooling.

The problem extends beyond training. Once AI models are deployed and in use, the cumulative energy consumed to generate text, images, and other outputs vastly exceeds what was used during the initial training phase. Even as individual AI models become more energy-efficient, overall energy consumption keeps rising because AI is proliferating across industries and applications.

Manufacturing the specialized computer chips called graphics processing units (GPUs) that power AI systems creates additional environmental damage. The extraction of rare earth elements needed for these chips contaminates soil and water, pollutes the air, and destroys forest habitats. The industry also generates significant electronic waste as hardware becomes obsolete.

Why Are Regulators Ignoring This Problem?

The EU's AI Act, which became the world's first comprehensive AI law when it took effect on August 1, 2024, acknowledges some environmental consequences and requires that AI systems be developed and used in a sustainable manner. However, the law's teeth are limited. It requires AI providers to disclose energy consumption data only when requested by the AI Office, rather than making such transparency mandatory. Codes of conduct to assess and minimize environmental impact are voluntary, not compulsory.

The United Kingdom has taken an even weaker approach. The UK government's 2023 white paper on AI regulation explicitly excludes sustainability from its scope, stating that environmental issues are "outside of the scope of our proposals for a new overarching framework for AI regulation," even though the paper acknowledges that AI can contribute to climate solutions .

"The proposed regulatory framework does not seek to address all of the wider societal and global challenges that may relate to the development or use of AI. This includes issues relating to sustainability," the UK government stated in its white paper.

UK Government, 2023 AI Regulation White Paper

The core issue is that most AI laws are intentionally "human-centric," focusing on how AI affects people rather than the planet. This narrow framing leaves environmental consequences unaddressed as the technology scales rapidly.

How to Make AI Regulation Environmentally Responsible

  • Mandatory Transparency Requirements: AI developers should be required to disclose detailed information about energy consumption, water usage, carbon emissions, rare earth elements extracted, and plastic used during the entire AI production process. This creates a baseline for measuring progress and holding companies accountable.
  • Energy Efficiency Labeling Systems: An AI "energy star" rating system could mirror the EU's existing energy efficiency labels for appliances, ranking AI systems from most efficient (dark green) to least efficient (red). Consumers could see warnings about carbon dioxide emissions or water consumption for each query they make.
  • Enforceable Targets and Limits: Once transparency data is available, regulators can set specific targets for energy efficiency, carbon emissions, and water use. Practical solutions include training AI models on less carbon-intensive energy grids or in less water-intensive data centers.
  • Financial Incentives: Tax incentives and funding programs could encourage technology companies to make more sustainable choices, making green AI development economically competitive with conventional approaches.
  • Social and Environmental Certification: A certification system could help consumers make informed choices about which AI systems to use or whether AI should be deployed for a particular task at all.
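To make the labeling idea above concrete, here is a minimal sketch of how an AI "energy star" band and a per-query carbon warning might be computed. The band names, the watt-hour thresholds, and the assumed grid carbon intensity are all illustrative assumptions, not figures from the EU scheme or any proposed regulation.

```python
# Hypothetical AI energy label, modeled loosely on EU appliance labels.
# Thresholds (Wh per query) and band names are illustrative assumptions.
BANDS = [
    (0.1, "A (dark green)"),   # most efficient: <= 0.1 Wh per query
    (0.5, "B (green)"),
    (2.0, "C (yellow)"),
    (10.0, "D (orange)"),
]
WORST = "E (red)"              # least efficient: > 10 Wh per query


def energy_label(wh_per_query: float) -> str:
    """Return an efficiency band for a model's average energy per query."""
    for threshold, band in BANDS:
        if wh_per_query <= threshold:
            return band
    return WORST


def query_warning(wh_per_query: float, grid_gco2_per_kwh: float = 400.0) -> str:
    """Consumer-facing warning: estimated grams of CO2 for a single query.

    grid_gco2_per_kwh is an assumed grid carbon intensity; a real scheme
    would use the intensity of the grid actually powering the data center.
    """
    grams = wh_per_query / 1000.0 * grid_gco2_per_kwh
    return f"~{grams:.2f} g CO2 per query ({energy_label(wh_per_query)})"
```

A regulator could pair such a banding rule with the mandatory disclosure data described above, so the label reflects audited figures rather than vendor self-reports.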

The challenge is that measuring AI's environmental effects accurately remains difficult, partly because technology companies lack transparency about their operations. Without clear data, regulators struggle to set meaningful standards.

"More transparency starts with AI developers having to disclose information about how much energy and water is consumed, how much carbon is emitted, the rare earth elements extracted and how much plastic is used during the AI production process," explained Louise Du Toit, a lecturer in law at Southampton Law School.

Louise Du Toit, Lecturer in Law, Southampton Law School, University of Southampton

As AI continues its rapid expansion across healthcare, finance, transportation, and countless other sectors, the environmental cost of powering this technology will only grow. Without integrating sustainability into AI laws through mandatory transparency, enforceable limits, and consumer-facing labeling, the planet faces mounting pressure from an industry that was supposed to help solve climate change, not accelerate it.