The biggest obstacle to regulating AI in journalism isn't disagreement over whether it should be regulated; it's that nobody can agree on what AI actually is. Policymakers, publishers, and technologists are working from fundamentally different definitions of artificial intelligence, creating a regulatory vacuum that makes it nearly impossible to craft clear, enforceable rules for how newsrooms can use these tools.

Why Can't Anyone Define AI?

The problem sounds simple but has massive consequences. When regulators talk about "AI," some mean only generative systems like ChatGPT and DALL-E, while others include traditional algorithms that newsrooms have used for years. This ambiguity creates what experts call "misalignment in policy conversations," where policymakers unknowingly work from different, unarticulated definitions.

The scope question matters enormously. Should regulation cover classical algorithms in use for decades, or only newer generative AI systems? Should it apply to AI that summarizes articles, moderates comments, or covers local events? Without clear boundaries, policymakers struggle to determine which specific news practices fall within legal parameters and how those practices differ from other uses of AI.

What Are Newsrooms Actually Using AI For?

Newsrooms are already experimenting with generative AI across multiple functions. These tools offer real productivity gains and innovation opportunities, but they also introduce significant risks. The practical applications include:

- Content Summarization: Automatically generating summaries of longer articles or newsletters to help readers quickly grasp key information.
- Local Event Coverage: Using AI to assist with reporting on local events, though results have been mixed in terms of accuracy and journalistic quality.
- Comment Moderation: Deploying AI systems to filter and moderate user comments on articles and social media platforms.
- Story Discovery: Leveraging AI to identify potential stories and trends from large datasets.
- Productivity Enhancement: Using generative AI tools to streamline routine tasks and free up journalists for more complex reporting work.

The challenge is that while these applications offer clear benefits, they also create risks, including inaccuracies, ethical dilemmas, copyright violations, and erosion of public trust.

How Is the Definitional Problem Breaking Policy?

The lack of agreement on what constitutes AI directly undermines policymaking. Some countries are advancing legislation, such as the European Union's AI Act, while others are developing guidelines or lagging behind entirely. All of these efforts face the same fundamental challenge: determining which AI news practices fall within legal parameters.

The proprietary and fluid nature of AI systems introduces additional complications. These technologies evolve faster than policy can reasonably keep up with, forcing policymakers to make decisions about future risks they cannot fully predict. Meanwhile, the quantity and type of data collected by generative AI programs raise new privacy and copyright concerns that existing regulatory frameworks were never designed to address.

What Specific Regulatory Challenges Are Emerging?

Beyond definitional confusion, several concrete regulatory problems are emerging. News publishers are claiming copyright and terms-of-service violations by AI companies that use news content to train their models without authorization. In response, publishers have pursued both litigation and licensing agreements, seeking either penalties or compensation for the use of their copyrighted work.

There is also uncertainty about how regulation should handle the distinction between how publishers use AI in news production and how AI systems draw from news content.
These are fundamentally different problems requiring different solutions, yet current policy proposals often treat them as a single issue.

How Can Newsrooms Navigate This Uncertainty?

While policymakers struggle to define AI, newsrooms cannot wait for perfect regulation. Establishing transparency and disclosure standards requires coordination between legal requirements and organizational policies: some areas of transparency may need to be addressed through law, similar to current advertising disclosures, while others are more appropriately handled at the organizational level.

- Internal Guidelines: Develop clear ethical guidelines for AI use within your newsroom, including policies on when and how AI tools can be deployed in news production.
- Content Labeling Standards: Establish consistent practices for labeling AI-generated or AI-assisted content so readers understand how each piece was created.
- Disclosure Policies: Create transparent disclosure standards that inform both journalists and the public about how AI is used in news production and distribution.
- Staff Education: Invest in training for both journalists and management so everyone understands AI capabilities, limitations, and ethical implications.
- Public Communication: Go beyond simple transparency by providing context about journalistic practices alongside AI disclosures, since the public often lacks a nuanced understanding of how news is made.

Tech companies and publishers will ultimately be responsible for establishing their own principles and policies for AI use within their organizations, covering everything from appropriate applications to labeling practices to image manipulation standards. These organizational standards will need to fit alongside legal requirements and be similar enough across the industry to create meaningful consistency.

What Does the Research Show About Public Understanding?
Empirical research reveals significant gaps in how the public understands AI in journalism. Many news companies are already adopting AI, yet substantial gaps in understanding persist among journalists themselves, and many members of the public struggle to distinguish AI-generated content from human work.

Interestingly, research shows that public trust tends to hinge on perceived story quality and the credibility of the outlet more than on whether AI was used in production. This suggests that transparency about AI use alone is insufficient; newsrooms must also provide broader context about journalistic practices and editorial standards.

What Comes Next for AI Regulation in News?

Forward-thinking collaboration among policymakers, publishers, technology developers, and other stakeholders is critical to strike the right balance and support public access to information. The definitional problem won't solve itself, and waiting for perfect regulatory clarity could leave newsrooms vulnerable to both legal challenges and public backlash.

The stakes are high. Without clear policy guidance, technology companies' own decisions will continue to dictate how AI is developed, implemented, and used in news. Newsrooms that proactively establish transparent, ethical guidelines and invest in staff education will be better positioned to navigate whatever regulatory framework eventually emerges; those that wait for regulation may find themselves playing catch-up on both legal and ethical fronts.
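As a concrete illustration of the content-labeling and disclosure standards discussed above, a newsroom could attach machine-readable AI-use metadata to each story and derive the reader-facing label from it, keeping disclosures consistent across the site. The sketch below is hypothetical: the `AIUse` categories, field names, and label wording are illustrative assumptions, not an industry standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class AIUse(Enum):
    """Hypothetical categories of AI involvement in a story."""
    NONE = "none"
    ASSISTED = "ai-assisted"    # e.g. summarization drafts, research help
    GENERATED = "ai-generated"  # substantial machine-written text


@dataclass
class Disclosure:
    """Machine-readable AI-use record attached to a published story."""
    story_id: str
    ai_use: AIUse
    tools: list = field(default_factory=list)  # tools involved, if any
    human_reviewed: bool = True                # editorial sign-off


def reader_label(d: Disclosure) -> str:
    """Render the short disclosure line shown to readers."""
    if d.ai_use is AIUse.NONE:
        return "No AI tools were used in producing this story."
    verb = ("assisted in producing" if d.ai_use is AIUse.ASSISTED
            else "generated portions of")
    review = " A journalist reviewed the final text." if d.human_reviewed else ""
    return f"AI tools {verb} this story.{review}"


d = Disclosure(story_id="2024-0417-council", ai_use=AIUse.ASSISTED,
               tools=["summarizer"])
print(reader_label(d))
```

Because the record is structured rather than free text, the same metadata can also feed a site-wide disclosure log or an API response, which supports the consistency across outlets that the organizational standards above call for.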