Charities hoping to save time and money by using artificial intelligence to create campaign images are discovering an uncomfortable truth: the technology is backfiring in ways that threaten their core mission of building trust with donors. A new study from the University of East Anglia analyzed 171 AI-generated images from 17 major organizations, including Amnesty International, Plan International, the World Health Organization (WHO), and WWF, revealing that when AI images appear in campaigns, the humanitarian cause effectively disappears from public conversation.

Why Are Charities Turning to AI in the First Place?

As humanitarian budgets tighten and production pressures increase, many charities and nongovernmental organizations (NGOs) are tempted by AI's promise of speed, cost efficiency, and creative flexibility. The technology offers a cheaper, faster way to produce campaign visuals without the expense of hiring photographers or traveling to affected regions. For some organizations, AI-generated imagery also serves a protective purpose: it can reduce the number of vulnerable people who would otherwise be re-traumatized by being photographed or filmed for campaign purposes.

However, the research suggests this "high-tech shortcut" to empathy is fundamentally misguided. "Charities exist because people care about other people," explains David Girling, co-author of the study from UEA's School of Global Development. "The moment when audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk."

What Did the Research Actually Find?

The study, titled "Artificial Authenticity: The Rise of Images Generated by Artificial Intelligence in Charity and Development Communications," uncovered several troubling patterns in how the public responds to AI-generated charity images.
- Image Composition: Nearly 70% of the AI images analyzed were designed to appear photorealistic. Poverty was the dominant theme, accounting for roughly one-third of the images (51 of 171) and often featuring children, followed by environment (35) and human rights (32) themed images.
- Transparency Doesn't Help: While 85% of images were appropriately captioned as AI-generated, this disclosure did not shield the cause or the organizations from backlash.
- Public Engagement Shifts: When AI images were used without disclosure, audiences adopted an investigative tone, focusing on whether the images were artificial rather than evaluating the charity's actual work.
- Comment Analysis: Of the comments analyzed, 141 focused on AI ethics and authenticity concerns rather than the charitable cause, 122 critiqued technical execution and visual quality, and only 80 (less than 20%) engaged with the humanitarian issue itself.

The Environmental Irony That's Damaging Green Organizations

One of the most striking findings involves environmental organizations facing criticism for using energy-intensive AI tools to promote sustainability. WWF Denmark, for example, faced significant public backlash for this contradiction, with climate-conscious donors labeling the move "ecocidal." This message-medium misalignment, in which the method of communication contradicts the message itself, represents a particularly damaging form of reputational harm.

The irony is not lost on an increasingly media-literate public. When organizations that claim to care about environmental sustainability use computationally expensive AI systems to create their campaign materials, they send a mixed signal that undermines their credibility on climate issues.

How Can Charities Use AI More Responsibly?
The research team offers practical recommendations for organizations considering AI-generated imagery in their communications strategies:

- Develop Sector-Specific Tools: Work with technology providers and AI companies to develop charity-sector-specific AI tools with built-in bias detection, stereotype alerts, and ethical guardrails tailored to humanitarian representation.
- Co-Create With Communities: If using AI-generated imagery, involve local communities in the creative process, including generating AI prompts and approving final imagery, to ensure accuracy and cultural appropriateness.
- Invest in Ethical Training: Provide communications teams with training in ethical prompt engineering to avoid reputational harm and unintended bias in AI-generated content.

What Does This Mean for the Future of Charity Communications?

The research reveals that the public response to AI in charity work is far from simple. In some cases, people welcomed AI as a way to protect vulnerable individuals from exploitation and preserve their dignity. In others, they criticized it as a distraction from real solutions, particularly in emotionally sensitive campaigns such as cancer or famine relief.

Deborah Adesina, co-author of the study and now a media, communications and development consultant, offers a sobering perspective on what lies ahead: "Ultimately, the future of charity storytelling will not hinge on technological capability alone. It will depend on whether organizations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly skeptical."

The takeaway for charities is clear: adopting AI for efficiency's sake, without considering the broader implications for trust and authenticity, may save money in the short term but risks losing far more in donor confidence and organizational credibility over the long term.
As the humanitarian sector continues to grapple with budget constraints, the challenge will be finding solutions that balance technological innovation with the human authenticity that drives charitable giving.