Europe faces an escalating disinformation crisis in which AI-generated deepfakes and fake news spread faster than human fact-checkers can verify them. In response, EU-funded researchers are developing AI detection systems to help journalists and fact-checkers identify manipulated content in real time. According to a recent European Commission survey, nearly two-thirds of Europeans had encountered disinformation or fake news within the previous week, making the need for rapid verification tools urgent.

## Why Is AI-Generated Disinformation So Hard to Spot?

The problem has grown dramatically worse as generative AI tools have improved. When the AI4Media project began in 2020, tools like ChatGPT did not yet exist and generative AI was still in its infancy. Today, anyone with access to generative AI can create fabricated images, cloned voices, or realistic-looking news articles that fool both humans and algorithms. Social media platforms amplify this content at scale, making it nearly impossible for traditional fact-checking to keep pace.

Last winter, this challenge played out across European social media. Posts claimed that radical Islamists were "invading" Christmas markets, accompanied by videos that appeared to show disruptions at Brussels Christmas markets and a photo of heavy security. The reality was starkly different: the videos came from peaceful demonstrations, and the photo had been generated entirely using AI. What looked convincing at first glance was misleading or completely fabricated.

"When a fake story is supported by realistic images, it becomes much easier to believe and more tempting to share because the content generates higher views," said Yiannis Kompatsiaris, research director at the Centre for Research and Technology Hellas.

## How Are Researchers Building Detection Systems?

In 2020, a multinational team of researchers, journalists, and technology companies launched the AI4Media initiative with EU funding to create AI tools specifically designed for newsroom workflows. Rather than replacing human judgment, these systems act as a first line of defense, flagging potentially manipulated content for journalists to review quickly.

Media organizations including Deutsche Welle in Germany and VRT in Belgium tested these verification tools in real-world settings. The approach combines automated detection with human expertise: AI systems scan content at scale, but professional fact-checkers make the final determination about whether material is genuine or manipulated.
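Neither project's newsroom integration code appears in this article, but the flag-then-review pattern described above is straightforward to sketch. The following minimal Python example is illustrative only: `ContentItem`, `triage`, and `dummy_detector` are hypothetical names, and the scoring function stands in for a trained multimodal model.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ContentItem:
    item_id: str
    media_type: str                # "text", "image", or "audio"
    payload: str                   # stand-in for the actual media
    risk_score: float = 0.0        # filled in by the automated detector
    verdict: Optional[str] = None  # set only by a human fact-checker

def triage(items: list[ContentItem],
           detector: Callable[[str], float],
           threshold: float = 0.7) -> list[ContentItem]:
    """Score every item automatically, but queue only high-risk items
    for human review; the model never issues a verdict on its own."""
    review_queue = []
    for item in items:
        item.risk_score = detector(item.payload)
        if item.risk_score >= threshold:
            review_queue.append(item)
    return review_queue

# Toy detector; a real system would call a trained detection model.
def dummy_detector(payload: str) -> float:
    return 0.9 if "synthetic" in payload else 0.1

items = [ContentItem("a1", "image", "synthetic market photo"),
         ContentItem("a2", "text", "routine weather report")]
for flagged in triage(items, dummy_detector):
    flagged.verdict = "manipulated"  # recorded by the fact-checker, not the model
    print(flagged.item_id, flagged.risk_score, flagged.verdict)
```

The design point is the division of labor: the threshold controls how much lands in the human queue, and nothing is labeled fake without a fact-checker's verdict.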
A parallel EU-funded project called AI4Trust takes a broader approach. Rather than just detecting individual pieces of fake content, AI4Trust analyzes how disinformation spreads across networks. The system tracks multiple social media and news sites in near real time, using advanced AI algorithms to process text, audio, and images in multiple languages. Because the volume of online material far exceeds human capacity, the system filters and flags posts with a high risk of being fake, which professional fact-checkers then review. Their verified assessments feed back into the system to improve its accuracy over time.

## Steps to Strengthen Disinformation Detection in Your Organization

- Integrate AI verification tools into workflows: Deploy detection systems like those developed by AI4Media directly into newsroom processes so journalists can flag suspicious content before publication without slowing down reporting cycles.
- Combine automated detection with human review: Use AI to filter and prioritize content, but always have trained fact-checkers make final determinations about authenticity, as AI systems alone cannot replace human judgment.
- Continuously retrain detection models: Because generative AI improves rapidly, detection systems must be updated regularly with new examples of AI-generated content to stay effective against the latest techniques (see the sketch after this list).
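AI4Trust's feedback loop and the retraining step above share one mechanic: verified fact-checker verdicts become fresh training labels. The projects' internal code is not public here, so the sketch below is a minimal, assumed implementation using scikit-learn's LogisticRegression over hypothetical media embeddings; `RetrainableDetector` and its methods are illustrative names.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class RetrainableDetector:
    """Risk scorer that is periodically refit on fact-checker verdicts."""

    def __init__(self) -> None:
        self.model = LogisticRegression()
        self.features: list[np.ndarray] = []
        self.labels: list[int] = []

    def add_verified_example(self, feats: np.ndarray, is_fake: int) -> None:
        # Each verified assessment (1 = manipulated, 0 = genuine)
        # becomes new training data, closing the feedback loop.
        self.features.append(feats)
        self.labels.append(is_fake)

    def retrain(self) -> None:
        # Refit on everything seen so far; a production system would
        # schedule this and evaluate on held-out data first.
        self.model.fit(np.stack(self.features), np.array(self.labels))

    def risk_score(self, feats: np.ndarray) -> float:
        # Probability that the item is manipulated.
        return float(self.model.predict_proba(feats.reshape(1, -1))[0, 1])

# Toy demo with random "embeddings" standing in for real media features.
rng = np.random.default_rng(0)
detector = RetrainableDetector()
for _ in range(20):
    detector.add_verified_example(rng.normal(0.0, 1.0, 8), 0)  # genuine
    detector.add_verified_example(rng.normal(1.5, 1.0, 8), 1)  # manipulated
detector.retrain()
print(round(detector.risk_score(rng.normal(1.5, 1.0, 8)), 3))  # high risk expected
```

In production the retraining would be scheduled, validated, and versioned, since a model silently refit on noisy labels can regress; the point here is only the loop itself.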
## What Makes This an "Arms Race"?

The challenge facing researchers is that generative AI models evolve faster than detection systems can adapt. When AI4Media began, the quality of AI-generated content was relatively crude. Today, the realism has advanced dramatically, forcing researchers into a constant cycle of updating their models.

"We are in a continuous loop of trying to be able to understand and catch up with the latest technology. The technology has progressed so fast that it's difficult even for us as researchers to keep up. We had to continuously update our models to detect newly generated images," explained Akis Papadopoulos, a researcher at CERTH who worked on the project.

The team automated parts of the verification process and regularly retrained their systems, but staying ahead demands continued investment in both research and the media sector. According to the European Digital Media Observatory, an independent EU-funded hub that monitors disinformation campaigns, AI-generated disinformation has increased steadily in recent months, extending well beyond isolated hoaxes to coordinated campaigns that can influence elections, distort public debate, and undermine trust in institutions.

## How Is Europe's Regulatory Framework Supporting These Efforts?

Technology alone cannot solve the disinformation problem. Europe is layering regulatory measures on top of detection tools to create a comprehensive defense. The EU's Digital Services Act requires very large online platforms to assess and mitigate systemic risks, including the spread of disinformation, and to increase transparency about how their systems operate.

The Artificial Intelligence Act introduces transparency obligations for certain generative AI systems, including requirements to label AI-generated content. A draft Code of Practice on transparency for AI-generated content aims to encourage clearer disclosure and watermarking standards. The European Commission published the second draft of this code in March 2026, simplifying it to give signatories more flexibility and reduce the compliance burden.

The European Media Freedom Act adds another layer of protection by setting out safeguards to ensure that professional media content is recognized and protected on major online platforms. Large platforms must notify recognized media outlets before removing journalistic content and explain their reasoning, giving organizations time to respond. This prevents legitimate reporting from being taken down without justification.

"We need tools, but we also need policies and rules. There is no single solution. We need a combination of AI tools, transparency, regulation and awareness if we want to be more effective against disinformation," said Kompatsiaris.

The European Parliament's Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties adopted a joint position on the AI omnibus in March 2026 by a large majority. The compromise text notably adds a new prohibited AI practice: a ban on AI systems that generate non-consensual intimate deepfake imagery where providers and deployers have not put effective safety measures in place.

Together, these regulatory measures and detection systems form a wider shield: technology to detect manipulation, regulation to improve transparency and accountability, and safeguards to protect responsible journalism. Yet experts emphasize that public awareness remains just as vital as any technological or regulatory solution.