The Missing Piece in AI Ethics: Why the Planet Matters as Much as Fairness

The global conversation about responsible artificial intelligence has centered on human concerns like privacy, bias, and fairness, but a crucial dimension is being overlooked: whether AI systems can operate within planetary boundaries. As AI adoption accelerates across industries, researchers and ethicists are raising an uncomfortable question: can AI truly be trustworthy if it's destroying the environment that sustains us?

What Makes AI an Environmental Problem?

Artificial intelligence is often described as weightless and immaterial, existing somewhere "in the cloud." The reality is far different. AI runs on physical infrastructure that consumes enormous amounts of electricity, water, and rare minerals extracted from vulnerable ecosystems. Training a single large language model can consume as much electricity as hundreds of households use in a year, according to research on AI's resource intensity.
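To make that scale concrete, a back-of-envelope calculation helps. The sketch below is illustrative only: every figure (GPU power draw, cluster size, training duration, overhead factor, household consumption) is an assumption chosen for the example, not a measured value for any real model.

```python
# Back-of-envelope estimate of training energy vs. household consumption.
# Every figure below is an illustrative assumption, not a measured value.

GPU_POWER_KW = 0.7            # assumed average draw per accelerator, in kW
NUM_GPUS = 2_000              # assumed cluster size
TRAINING_DAYS = 60            # assumed training duration
PUE = 1.2                     # assumed power-usage-effectiveness overhead

# Total electricity for the run, including cooling/overhead via PUE
training_kwh = GPU_POWER_KW * NUM_GPUS * TRAINING_DAYS * 24 * PUE

HOUSEHOLD_KWH_PER_YEAR = 10_000   # assumed annual household consumption

households = training_kwh / HOUSEHOLD_KWH_PER_YEAR
print(f"Training run: {training_kwh:,.0f} kWh, "
      f"roughly {households:.0f} households for a year")
```

With these particular assumptions the run lands in the low hundreds of household-years, consistent with the order of magnitude the research describes; different assumptions shift the number, but not the conclusion that training is energy-intensive.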

But training is only the beginning. The real environmental footprint comes from inference: the billions of daily prompts, searches, translations, and predictions that occur after a model is deployed. Data centers already account for a significant share of global electricity demand, and this share is expected to rise sharply toward 2030 as AI becomes embedded across all sectors of society.
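The arithmetic behind inference's dominance is simple: a tiny per-query cost multiplied by an enormous query volume. The figures below are illustrative assumptions, not measurements of any real service.

```python
# Rough aggregate of inference energy at scale.
# Both figures are illustrative assumptions, not measurements.

ENERGY_PER_QUERY_WH = 0.3          # assumed energy per prompt, in Wh
QUERIES_PER_DAY = 1_000_000_000    # assumed global daily query volume

daily_kwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY / 1_000   # Wh -> kWh
annual_gwh = daily_kwh * 365 / 1_000_000                    # kWh -> GWh

print(f"{daily_kwh:,.0f} kWh/day, about {annual_gwh:,.1f} GWh/year")
```

Even at a fraction of a watt-hour per query, a year of serving at this assumed volume exceeds the energy of the training run many times over, which is why per-query efficiency matters so much once a model is deployed.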

Water consumption presents another critical challenge. Cooling data centers requires enormous amounts of water, creating new pressure on ecosystems and communities already experiencing drought and water stress. Before an AI system answers a single question, it may have already accumulated a significant ecological debt through the extraction of critical minerals needed for hardware.

How Is Environmental Sustainability Being Integrated Into AI Ethics Frameworks?

A systematic review of 36 peer-reviewed articles published between 2016 and 2025 examined how ethical norms are being applied to AI-driven innovation across different sectors and regions. The research identified five main ethical challenges: algorithmic bias, transparency, data protection, responsibility, and sustainability. Notably, sustainability emerged as a growing but still underdeveloped priority, with only modest integration between ethical AI principles and Environmental, Social, and Governance (ESG) models.

The review revealed notable differences in ethical priorities between Western and non-Western perspectives, particularly within healthcare, education, and human resources sectors. While ESG frameworks offer a promising structure for embedding ethical standards into innovation ecosystems, their practical implementation remains inconsistent across organizations.

Universities and research institutions are beginning to address this gap. Santa Clara University recently established the Cunningham Shoquist Center for Applied AI and Human Potential, funded by a landmark gift from NVIDIA executive Debora Shoquist. The center explicitly commits to advancing applied AI research while maintaining a focus on safety, sustainability, and quality of life.

"I believe Santa Clara will always lead with its values, employing an ethical and humanistic lens as it taps into AI's power to unlock human potential across thousands of applications. The AI Center will serve the University for years to come, helping ensure AI technologies serve as a catalyst for human dignity and advancement of the common good," stated Debora Shoquist, executive vice president at NVIDIA.


Can AI Actually Accelerate or Delay the Green Transition?

One of the most uncomfortable questions facing the AI industry is whether artificial intelligence could delay the green transition rather than accelerate it. Major oil and gas companies are among the most sophisticated adopters of AI, using machine learning to analyze seismic data, identify new reserves, and optimize extraction from mature wells. By lowering costs and improving efficiency, AI can make fossil fuels more competitive with renewables.

Simultaneously, the explosive growth of data centers requires a stable, 24/7 electricity supply. Without sufficient production and storage of renewable energy, this demand risks prolonging reliance on natural gas or even coal to ensure the necessary baseload power. The choice between using AI to squeeze more carbon from the earth or to help phase out fossil fuels faster is fundamentally political and ethical.

Steps to Redefine What "Trustworthy AI" Actually Means

  • Embed Environmental Metrics into Accountability Frameworks: Environmental impact must become a formal part of AI accountability, similar to how bias audits and transparency requirements are now standard. Environmental, Social, and Governance reporting should include mandatory disclosure of energy use, water consumption, and material footprint for AI systems.
  • Shift from "Bigger Is Better" to "Frugal AI": The dominant trend in AI development has been larger models with more data and more computation. Moving toward smaller, specialized models and edge intelligence (running AI on local devices rather than centralized data centers) can dramatically reduce energy consumption and latency.
  • Redesign Data Center Infrastructure: Data centers must be reimagined as integrated components of local energy ecosystems rather than isolated energy sinks. Waste heat can support district heating systems, backup power can rely on green hydrogen instead of diesel generators, and circular economy principles can guide hardware design.
  • Implement Governance Mechanisms Beyond Technology: Technological innovation alone will not automatically make AI sustainable. Standardized metrics, labeling schemes, and key performance indicators can empower users to choose greener AI providers. Progressive taxation of digital resource consumption could discourage wasteful overuse while keeping basic access affordable.
  • Explore Emerging Computing Architectures: Neuromorphic computing (inspired by the human brain), optical computing (using photons rather than electrons), and quantum computing promise significant efficiency gains. While still developing, these technologies signal that AI does not have to remain permanently tied to today's energy-intensive architectures.
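The first step above (embedding environmental metrics into accountability frameworks) implies that AI systems would need a standard disclosure record. The sketch below is one hypothetical shape such a record could take; the class name, fields, and numbers are all illustrative assumptions, not drawn from any published ESG standard.

```python
from dataclasses import dataclass

@dataclass
class AIEnvironmentalDisclosure:
    """Hypothetical per-system disclosure record. Field names are
    illustrative, not taken from any published ESG standard."""
    model_name: str
    training_energy_kwh: float            # total energy of the training run
    inference_kwh_per_1k_queries: float   # marginal energy of serving
    water_litres_per_kwh: float           # cooling-water intensity
    embodied_co2e_kg: float               # hardware manufacturing footprint

    def annual_inference_water_litres(self, queries_per_year: int) -> float:
        """Estimate cooling water attributable to a year of inference."""
        kwh = self.inference_kwh_per_1k_queries * queries_per_year / 1_000
        return kwh * self.water_litres_per_kwh

# Example disclosure filled with made-up numbers
report = AIEnvironmentalDisclosure(
    model_name="example-model",
    training_energy_kwh=2_400_000,
    inference_kwh_per_1k_queries=0.5,
    water_litres_per_kwh=1.8,
    embodied_co2e_kg=30_000,
)
```

A schema like this would let the labeling schemes and key performance indicators mentioned above compare providers on a common basis, much as bias audits rely on standardized reporting today.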

What Do Global AI Ethics Programs Say About Environmental Responsibility?

International institutions are beginning to address the environmental dimension of AI ethics. The United Nations Interregional Crime and Justice Research Institute (UNICRI) and LUMSA University in Rome are hosting a Summer School on Artificial Intelligence, Ethics, and Human Rights that explicitly addresses sustainability alongside traditional ethical concerns like bias, discrimination, and fairness.

The curriculum examines how ethical principles and values should guide AI throughout its entire life-cycle, from design and development to deployment and real-world applications. Participants engage with concepts including meaningful human control, trustworthiness, explainability, transparency, non-discrimination, privacy, surveillance, autonomy, accountability, and sustainability.

In December 2024, UN Secretary-General António Guterres addressed the Security Council with a stark warning: "Humanity's fate can't be left to algorithms." He stressed the urgent need for global AI governance, noting that the rapid pace of AI development is outpacing regulatory efforts and increasing risks to global peace and security. His remarks emphasized that safety, equality, accountability, and human oversight must remain central to AI governance.

The convergence of these efforts signals a growing recognition that trustworthy AI must be redefined. It can no longer be measured only by its precision, fairness, and transparency. Instead, trustworthy AI must also safeguard the ecological foundations of life itself. As one researcher noted, when machines think, the planet should not have to pay the price.