Newsrooms are adopting generative AI tools rapidly, but simply telling readers "this was AI-generated" isn't enough to maintain trust. The real problem is deeper: most people lack a nuanced understanding of how journalism works in the first place, so transparency about AI use is meaningless without broader context about human reporting practices. This gap between what newsrooms are doing and what the public understands is a fundamental challenge that goes beyond technical disclosure.

What's Actually Happening in Newsrooms Right Now?

Generative AI tools like ChatGPT, DALL-E, and Gemini are already embedded in newsroom workflows. These systems help with concrete tasks: summarizing articles, generating newsletters, assisting with local event coverage, moderating comment sections, and even finding story leads. The productivity gains are real and appealing to resource-strapped newsrooms.

But here's the tension: while these tools offer innovation and efficiency, they also introduce significant risks. Newsrooms are grappling with inaccuracies in AI-generated content, copyright violations when AI systems train on news articles without permission, and the erosion of public trust. Some news publishers have filed lawsuits against AI companies for using their copyrighted content to train models without authorization or compensation.

Empirical research reveals a troubling gap. Many news organizations already use AI, yet journalists themselves often lack a deep understanding of how these systems work. Meanwhile, the public struggles to distinguish AI-generated content from human-written work. Surprisingly, studies show that trust tends to hinge on perceived story quality and the credibility of the outlet itself, not on whether AI was involved.

Why Is Transparency Alone Failing?

Here's where the story gets counterintuitive. You might assume that if newsrooms simply labeled AI-generated content clearly, readers would understand and trust the work. That's not how it works in practice. The Center for News, Technology and Innovation found that "transparency alone is not enough." The public largely lacks a nuanced understanding of journalistic practices in general, so readers need broader context to make sense of AI use.

Think about it this way: if someone doesn't understand how a human journalist fact-checks a story or interviews sources, telling them "this summary was AI-generated" conveys almost no meaningful information. They can't evaluate whether the AI did a good job because they don't know what a good job looks like in journalism. Transparency initiatives must expand beyond simply disclosing AI use to include education about how human journalists work.

How Newsrooms Can Build Real Trust With Readers

- Establish Clear Internal Guidelines: News organizations must develop ethical guidelines that specify which tasks are appropriate for AI, how content should be labeled, and when human oversight is required before publication.
- Educate Both Journalists and the Public: Newsrooms need to invest in training journalists about AI capabilities and limitations, while also helping readers understand journalistic practices beyond just AI disclosure.
- Implement Transparent Disclosure Policies: Create consistent standards for when and how AI use is disclosed, ensuring readers understand not just that AI was involved but what role it played in the reporting process (a minimal sketch of what such a disclosure record could look like follows this list).
- Address Copyright and Data Sourcing: Establish clear policies about which news sources and data are used to train or inform AI systems, respecting intellectual property rights and terms-of-service agreements.
- Collaborate on Industry Standards: Work with other publishers and policymakers to develop shared definitions and best practices, rather than each newsroom creating isolated policies.
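To make role-level disclosure more concrete, here is a minimal sketch of how a newsroom might record that information in machine-readable form. It is an illustration only, not an industry standard: the field names, the review categories, and the example policy URL are assumptions, and any real schema would have to emerge from the shared-standards work described above.

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class AIDisclosure:
    """Hypothetical machine-readable disclosure record attached to a story.

    The fields below are illustrative assumptions, not an established standard.
    """
    ai_used: bool        # was any generative AI involved at all?
    tools: List[str]     # which systems were used, e.g. ["ChatGPT"]
    role: str            # what the AI actually did: "summary", "draft", "translation", "research"
    human_review: str    # level of oversight before publication
    policy_url: str      # link to the newsroom's public AI guidelines

# Example: a newsletter summary drafted by AI and then reviewed by a staff editor.
disclosure = AIDisclosure(
    ai_used=True,
    tools=["ChatGPT"],
    role="summary",
    human_review="edited and fact-checked by a staff editor",
    policy_url="https://example-news.org/ai-policy",  # placeholder URL
)

# Serialize so the label can travel with the article (CMS metadata, feeds, APIs).
print(json.dumps(asdict(disclosure), indent=2))
```

A structured record like this could drive both the reader-facing label and internal audits, but as argued above, the label only becomes meaningful once readers also understand the human editorial process around it.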
The challenge is that without coordinated governance, individual newsroom decisions will continue to shape how AI is used in journalism on an ad hoc basis. Technology companies' own choices about how they develop and deploy these systems will also constrain what publishers can do. Forward-thinking collaboration among policymakers, publishers, technology developers, and other stakeholders is critical to strike the right balance and support public access to trustworthy information.

The Definitional Problem That's Blocking Real Solutions

One major obstacle is that nobody can agree on what "artificial intelligence" actually means. Is it only generative AI systems like ChatGPT, or does it include the traditional algorithms that newsrooms have used for years? Should the definition be technical, or written in plain language?

This disagreement creates real problems for policymaking. When policymakers, publishers, and technologists work from different but unarticulated definitions of AI, their policies end up misaligned: a regulation that seems clear to one group might be interpreted completely differently by another. The definitional confusion makes it nearly impossible to scope policy effectively or to determine which AI news practices should fall within legal parameters.

Regulation is also uneven globally. Some jurisdictions, such as the European Union, are advancing comprehensive legislation through the AI Act, while others, like Brazil, are still developing guidelines; many countries lag behind entirely. Key challenges include determining which AI uses are "high risk," managing copyright and data-sourcing issues, and ensuring that regulation doesn't inadvertently hamper innovation or press freedom.

What Needs to Happen Next?

The path forward requires action on multiple fronts simultaneously. Legislation will need to offer clear and consistent definitions of different AI categories, grapple with the repercussions of AI-generated content for copyright and civil liberties, and establish accountability mechanisms for violations. The failure of Canada's Artificial Intelligence and Data Act (AIDA) to pass suggests that policymakers need to include meaningful public participation in these deliberations, not just industry input.

For newsrooms specifically, the work is both technical and cultural. Establishing transparent, ethical guidelines for AI use is necessary but insufficient. Newsrooms must also invest in educating both their journalists and their audiences about how journalism works, what AI can and cannot do, and why certain editorial decisions are made. This broader transparency initiative is more ambitious than simply adding a label to AI-generated content, but it's the only approach that actually builds trust.

The stakes are high. As generative AI tools continue to evolve and become more capable, newsrooms face a choice: they can treat AI as just another tool to be deployed for efficiency, or they can use it thoughtfully while strengthening the public's understanding of journalism itself. The second path is harder, but it's the only one that preserves the credibility news organizations depend on.