Artificial intelligence is becoming one of the most powerful tools for fighting climate change, yet it's also one of the most resource-intensive technologies we've ever built. At Louisiana State University, researchers are using AI to identify tree coverage across entire nations, predict wildfires weeks in advance, and improve weather forecasting down to the neighborhood level. Meanwhile, the massive data centers required to power these systems are consuming millions of gallons of water daily and drawing enormous amounts of electricity. The paradox is stark: the same technology that helps us understand and prepare for environmental threats is itself becoming an environmental threat.

What Can AI Actually Do for Climate Science?

The practical applications of AI in environmental research are already delivering results that would have taken scientists years to achieve manually. Supratik Mukhopadhyay, a professor and researcher at LSU, has led projects that demonstrate AI's potential to accelerate climate science in ways that were previously impossible.

One of Mukhopadhyay's most significant projects involved training an AI system to identify and classify landscape features from satellite images. The program learned to distinguish trees, buildings, bodies of water, and roads with near-perfect accuracy, then mapped above-ground biomass and tree coverage across the entire United States. This work became part of DeepSat, now integrated into NASA's Earth Exchange platform.

Before this AI system existed, scientists had to manually piece together vast satellite images in which individual trees appeared as mere pixels. The AI solution was not only faster but also more accurate and scalable. "This is basically related to how much carbon is getting captured in the ground. It's very important for environmental scientists, because people want to know how much carbon is there," Mukhopadhyay explained.
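The core task here is land-cover classification: given a pixel's spectral measurements, decide which class it belongs to. The sketch below illustrates that idea with a deliberately tiny nearest-centroid rule; the class centroids and (red, near-infrared) reflectance values are invented for illustration, and DeepSat itself used deep neural networks trained on labeled imagery rather than anything this simple.

```python
import math

# Toy land-cover classifier: assign a pixel to the class whose spectral
# centroid is nearest. The (red, near-infrared) reflectance centroids are
# invented for illustration -- real systems like DeepSat learn features
# from labeled satellite imagery with deep networks, not fixed centroids.
CENTROIDS = {
    "water":     (0.05, 0.03),   # absorbs strongly in both bands
    "trees":     (0.08, 0.45),   # healthy vegetation reflects near-infrared
    "buildings": (0.30, 0.28),
    "road":      (0.20, 0.18),
}

def classify_pixel(red: float, nir: float) -> str:
    """Return the land-cover class with the nearest spectral centroid."""
    return min(
        CENTROIDS,
        key=lambda c: math.dist((red, nir), CENTROIDS[c]),
    )

if __name__ == "__main__":
    print(classify_pixel(0.07, 0.50))  # a vegetated pixel -> "trees"
    print(classify_pixel(0.04, 0.05))  # a dark, wet pixel  -> "water"
```

Summing the pixels classified as trees over a whole scene is what turns a classifier like this into a coverage map, which is why the per-pixel accuracy matters so much for the biomass and carbon estimates the article describes.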
Another breakthrough project, called DeepFire, uses AI to predict wildfires with more than 90% accuracy. The system analyzes surrounding weather data, including vegetation type, wind speeds, and storm conditions, to forecast the likelihood of a wildfire days or even weeks in advance. Once a prediction is made, the AI focuses on sensors and cameras in high-risk areas to detect fires as they begin to spread, then predicts how the fire will move to help first responders coordinate containment efforts. The system can even predict where lightning will strike, one of the most powerful indicators of wildfire risk.

How Is AI Transforming Weather Forecasting?

Weather forecasting represents another frontier where AI is fundamentally changing how scientists work. Traditional weather prediction relies on physics-based models that simulate atmospheric conditions. A new generation of AI techniques, called Deep Learning Numerical Weather Prediction (DL-NWP), is now complementing these physics models by learning patterns directly from historical weather data and observations.

The Weather Company, one of the leading organizations applying AI to meteorology, uses machine learning to optimize hundreds of global and regional weather models simultaneously, continuously learning which models perform best under specific conditions. This approach consistently delivers more accurate forecasts than relying on any single model alone.

AI also enables "super-resolution" forecasting, pushing weather prediction down to neighborhood-level detail rather than just regional scales. This means people and businesses can know with greater precision when and where weather impacts will actually occur. The speed improvements are equally significant: advanced AI-driven radar processing has shrunk forecast update cycles from tens of minutes to just a few seconds.
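The Weather Company's actual system is proprietary, but "learning which models perform best under specific conditions" can be sketched with a standard technique: blend the member forecasts with weights, then multiplicatively shrink the weight of whichever models have been erring most. The function names, learning rate, and error values below are all illustrative, not drawn from any real forecasting product.

```python
import math

def blend(weights, forecasts):
    """Weighted average of the member models' forecasts."""
    return sum(w * f for w, f in zip(weights, forecasts))

def update_weights(weights, errors, eta=1.0):
    """Shrink each model's weight exponentially in its recent forecast
    error, then renormalize -- a standard multiplicative-weights update
    that concentrates trust on recently accurate models."""
    raw = [w * math.exp(-eta * e) for w, e in zip(weights, errors)]
    total = sum(raw)
    return [r / total for r in raw]

if __name__ == "__main__":
    weights = [0.5, 0.5]                 # two models, trusted equally at first
    for _ in range(3):                   # model 0 keeps beating model 1
        weights = update_weights(weights, errors=[0.1, 1.0])
    print(weights)                       # weight concentrates on model 0
```

With hundreds of models instead of two, the same update lets the ensemble adapt as conditions change, which is why a learned blend can consistently beat any single member model.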
When storms are intensifying and conditions are changing rapidly, that speed difference can be the margin between life and death.

Steps to Understanding AI's Role in Climate Science

- Recognize AI's dual nature: AI accelerates climate research and environmental monitoring while simultaneously consuming massive amounts of energy and water, creating a complex trade-off that requires careful evaluation.
- Understand the specific applications: AI excels at pattern-recognition tasks like identifying forest coverage from satellite imagery, predicting natural disasters, and optimizing weather models, but these benefits must be weighed against infrastructure costs.
- Evaluate the resource demands: Data centers powering AI systems require millions of gallons of fresh water daily for cooling and consume electricity at scales comparable to small cities, raising questions about sustainability in water-stressed regions.

Why Are Data Centers Becoming a Major Environmental Concern?

The infrastructure required to run AI systems is becoming a flashpoint in the climate debate. Across Scotland, 17 AI data center proposals are at various stages of planning, with estimates suggesting $3 trillion will be spent on data centers supporting AI between 2025 and 2029. One proposed facility in East Ayrshire, called Rufus, would operate at 540 megawatts and require millions of liters of water daily for cooling.

Local residents and environmental experts are raising serious concerns about water consumption. Alex de Vries, who runs the Digiconomist blog tracking digital technology's environmental impact, estimates that a 540-megawatt facility could consume nearly 6 billion liters of fresh water annually just to generate the power it needs. This is particularly troubling given that the United Nations has warned of global "water bankruptcy."
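A figure on that scale is consistent with a simple back-of-envelope calculation. The water intensity of roughly 1.25 liters per kilowatt-hour of electricity generated is an assumed, illustrative value chosen here to show how such estimates are built; real intensities vary widely with the generation mix, and De Vries's exact assumptions are not stated in this article.

```python
# Back-of-envelope check on the annual water figure for a 540 MW facility.
# The 1.25 L/kWh water intensity of electricity generation is an ASSUMED,
# illustrative value; real intensities vary widely by generation mix.
POWER_MW = 540
HOURS_PER_YEAR = 24 * 365                    # 8,760 hours
WATER_PER_KWH_L = 1.25                       # assumed water intensity

annual_energy_kwh = POWER_MW * 1_000 * HOURS_PER_YEAR   # ~4.73 billion kWh
annual_water_l = annual_energy_kwh * WATER_PER_KWH_L    # ~5.9 billion liters

print(f"{annual_water_l / 1e9:.1f} billion liters per year")
```

The point of the exercise is the shape of the estimate, not the exact intensity: a facility drawing hundreds of megawatts around the clock turns even a modest liters-per-kilowatt-hour figure into billions of liters a year.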
The company behind the Rufus proposal, Intelligent Land Investments (ILI), argues that Scotland's cooler climate reduces cooling requirements, potentially allowing the facility to harvest rainwater on-site for all its water needs. However, experts dispute whether climate alone can offset the massive water demands of such a facility.

What Are the Hidden Risks of AI Systems in Research?

Beyond infrastructure concerns, there are fundamental limitations to AI systems that researchers are only beginning to understand. Mukhopadhyay emphasized that generative AI systems, which power many consumer-facing applications, have a critical flaw: they can produce confident-sounding answers to questions they were never trained to answer. This phenomenon, called "hallucination," is particularly dangerous in scientific research.

To illustrate the problem, Mukhopadhyay described a hypothetical AI trained only to identify giraffes and horses. Shown a giraffe or a horse, the system performs flawlessly. Shown an elephant, it doesn't recognize that it has encountered something outside its training data. Instead, it compares the elephant to giraffes and horses and confidently declares it to be one or the other, silently, without warning the user that it has seen something unfamiliar.

"The problem with that is it does that without giving you a warning, so you don't even know that it received something that it has never seen before and should not have received, like an elephant. And then it made a mistake, and it made that mistake silently," Mukhopadhyay noted.

In research contexts like cancer cell identification, such silent failures could have catastrophic consequences.
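The giraffe-and-horse example can be made concrete with a toy classifier. Everything below is invented for illustration: two made-up features (neck length and body mass in arbitrary units), centroids standing in for the trained classes, and a distance threshold for flagging unfamiliar inputs. The sketch shows the failure mode Mukhopadhyay describes: a softmax over class scores still returns a confident label for an elephant, unless an explicit out-of-distribution check surfaces the problem.

```python
import math

# Toy two-class classifier trained only on giraffes and horses.
# Features are (neck length, body mass) in arbitrary, invented units.
CENTROIDS = {"giraffe": (5.0, 1.0), "horse": (1.0, 1.0)}
SCALE = 3.0          # sharpness of the softmax (illustrative)
OOD_THRESHOLD = 3.0  # "too far from every class" cutoff (illustrative)

def predict(features):
    """Return (label, confidence, is_out_of_distribution)."""
    dists = {c: math.dist(features, mu) for c, mu in CENTROIDS.items()}
    logits = {c: -SCALE * d for c, d in dists.items()}
    z = sum(math.exp(v) for v in logits.values())
    probs = {c: math.exp(v) / z for c, v in logits.items()}
    label = max(probs, key=probs.get)
    # The extra check the hypothetical system lacks: flag inputs that
    # are far from EVERY class the model was trained on.
    is_ood = min(dists.values()) > OOD_THRESHOLD
    return label, probs[label], is_ood

if __name__ == "__main__":
    elephant = (1.0, 8.0)            # nothing like either training class
    label, confidence, is_ood = predict(elephant)
    # Without the distance check, the model silently reports a confident
    # "horse"; only the OOD flag reveals that the input was unfamiliar.
    print(label, round(confidence, 2), is_ood)
```

The softmax is the culprit: it only compares the classes the model knows, so its probabilities are high even when every class is a terrible fit. That is what makes the mistake silent, and why research-grade systems need a separate mechanism for saying "I have never seen this before."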
This is why Mukhopadhyay stressed that AI systems designed for research must be trained to near-perfect accuracy before deployment, and why human expertise remains irreplaceable in scientific work.

How Are Communities Responding to AI Data Center Proposals?

The tension between AI's potential benefits and its resource demands is playing out in real time in communities where data centers are proposed. In Hurlford, Scotland, residents are asking tough questions about whether the promised economic benefits justify the environmental and infrastructure costs. Some residents welcome the prospect of 120 to 150 new jobs in a region that desperately needs investment. Others worry that the jobs will go to workers from outside the area and that the facility will burden local infrastructure for decades.

Cheryl Rowland, an admin assistant at a construction company in Hurlford, questioned whether ILI would invest in education and upskilling programs to help local people grow into technical positions at the facility. She also expressed concern about pressure on the local electricity grid and the lack of a formal environmental impact assessment before planning approval.

For Lisa Beacham, a student who researched the water demands of the proposed facility, the core issue is transparency. She noted that ILI has no prior experience building data centers and that the company has not provided concrete details about a structure that would permanently transform the region for 40 to 50 years.

What Does This Mean for the Future of AI and Climate?

The emerging picture is one of profound complexity. AI is demonstrating genuine power to accelerate climate science, improve disaster prediction, and optimize resource use. At the same time, the infrastructure required to run these systems is creating new environmental pressures on water and energy supplies. Neither the benefits nor the costs can be ignored.
Mukhopadhyay concluded his talk at LSU by emphasizing that AI is neither flawless nor unsalvageable. The technology requires careful, informed deployment with full awareness of both its potential and its limitations. As data centers continue to proliferate globally, communities and policymakers will need to make increasingly difficult decisions about whether the climate benefits of AI-powered research justify the environmental costs of the infrastructure that makes that research possible.