Europe's new AI Act is running into a real-world problem: medical AI systems that combine genetic data with artificial intelligence don't fit neatly into existing regulatory boxes. As AI-powered precision medicine tools become more common in European hospitals, regulators are discovering that the EU's landmark AI regulation, which entered into force on August 1, 2024, wasn't designed with these complex medical applications in mind.

The convergence of multi-omics technology (which analyzes multiple layers of biological data, such as genes, proteins, and metabolites) with AI is transforming how doctors diagnose cancer, predict drug responses, and tailor treatments to individual patients. Yet the regulatory framework lags behind this innovation. Researchers at institutions including the Else Kröner Fresenius Center for Digital Health at TUD Dresden University of Technology have identified critical gaps: data integrity standards, algorithm transparency requirements, validation methods, and the integration of real-world evidence all remain unclear under current EU rules.

What's the Regulatory Gap in Medical AI?

The core problem is that precision medicine AI tools fall between two regulatory worlds. Device regulations and pharmaceutical regulations were written separately, and neither fully addresses AI-enabled omics systems. Some tools are approved as medical devices in Europe, like Owkin's AI diagnostic solutions for breast and colorectal cancer, while others navigate pharmaceutical pathways. This creates confusion about which rules apply and when.

The EU AI Act itself focuses on high-risk AI systems and transparency, but it wasn't written specifically for medical applications. Healthcare providers and companies developing these tools face overlapping, sometimes contradictory requirements.
The European Data Protection Supervisor (EDPS) now acts as a market surveillance authority for AI systems used by EU institutions, but the broader healthcare sector still lacks clear guidance on how to comply with both the AI Act and existing medical device rules simultaneously.

How Are European Regulators Adapting to Medical AI?

The EU is taking steps to address these gaps, though progress is uneven. The European Commission has launched regulatory sandboxes under Article 57 of the AI Act, which allow companies to test AI systems in controlled environments before full market approval. Additionally, the EDPS has established an AI Unit and developed an AI Preparedness Strategy to supervise AI systems across EU institutions and ensure they meet safety and human-rights standards.

In March 2026, the European Data Protection Board and the EDPS jointly issued an opinion on the European Commission's proposed European Biotech Act, calling for specific safeguards for sensitive health data while supporting the harmonization of clinical trials. This signals that regulators recognize the need for sector-specific rules tailored to biotech and precision medicine.

Meanwhile, the European Parliament is refining the AI Act itself through an "AI Omnibus" amendment process. In preliminary agreements reached in March 2026, MEPs extended compliance deadlines for high-risk AI systems, pushing requirements for systems listed in Annex III to December 2, 2027, and those in Annex I to August 2, 2028. The goal is to give companies, technical standards bodies, and national authorities more time to prepare.

Key Regulatory Challenges for Precision Medicine AI

- Data Integrity Standards: Precision medicine AI relies on genetic and molecular data that must meet strict quality standards, but the AI Act doesn't specify how to validate data integrity for omics applications.
- Algorithm Transparency Requirements: Deep learning models used in multi-omics analysis are often "black boxes," making it difficult to explain their decisions to doctors and regulators, yet the AI Act requires explainability for high-risk systems.
- Real-World Evidence Integration: Medical AI systems must prove they work in actual clinical settings, not just in laboratory studies, but EU rules don't clearly define how to collect and validate this evidence over time.
- Overlapping Regulatory Frameworks: Companies must navigate device regulations, pharmaceutical rules, data protection laws, and the AI Act simultaneously, creating a compliance burden and uncertainty.
- Validation and Continuous Monitoring: Precision medicine tools need adaptive validation strategies that allow updates as new data emerges, but current regulations treat software as static products.

The challenge is particularly acute because precision medicine AI systems are not one-size-fits-all. A tool that detects cancer from histology slides, like Owkin's approved solutions, requires different validation than a system predicting drug responses based on a patient's genetic profile. Yet the AI Act applies broad rules that don't account for these differences.

What Do Companies and Regulators Say About the Path Forward?

Industry and regulatory experts agree that new thinking is needed. Researchers have called for regulatory frameworks that balance innovation with safety, allowing companies to update AI systems based on real-world performance without triggering a full re-approval process. The concept of "safe moving targets" has emerged: regulators accept that medical AI will evolve continuously, provided companies implement robust monitoring and governance. The EU's approach, through regulatory sandboxes and extended compliance deadlines, reflects this philosophy.
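To make the "safe moving targets" idea concrete: robust post-market monitoring typically means tracking a deployed model's performance against the accuracy established at validation time and flagging drift for human review. The sketch below is purely illustrative, not a method prescribed by the AI Act or any regulator; the class name, window size, and tolerance threshold are all assumptions chosen for the example.

```python
from collections import deque


class PerformanceMonitor:
    """Illustrative post-market monitoring sketch: keep a rolling
    window of prediction outcomes and flag when accuracy drops
    meaningfully below the baseline established at validation."""

    def __init__(self, baseline_accuracy: float,
                 window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline_accuracy    # accuracy from the validation study
        self.tolerance = tolerance           # allowed drop before flagging
        self.outcomes = deque(maxlen=window) # True/False per prediction

    def record(self, prediction, ground_truth) -> None:
        """Log whether a deployed prediction matched the later-confirmed outcome."""
        self.outcomes.append(prediction == ground_truth)

    def rolling_accuracy(self) -> float:
        if not self.outcomes:
            return 1.0  # no evidence yet; nothing to flag
        return sum(self.outcomes) / len(self.outcomes)

    def drifted(self) -> bool:
        """True if recent accuracy fell more than `tolerance` below baseline,
        which in practice would trigger review rather than silent retraining."""
        return self.rolling_accuracy() < self.baseline - self.tolerance


# Usage: a model validated at 90% accuracy slips to 80% in deployment.
monitor = PerformanceMonitor(baseline_accuracy=0.90, window=50)
for pred, truth in [(1, 1)] * 40 + [(0, 1)] * 10:
    monitor.record(pred, truth)
print(monitor.rolling_accuracy())  # 0.8
print(monitor.drifted())           # True: 0.8 < 0.90 - 0.05
```

The point of such a loop is governance, not automation: a drift flag is evidence for regulators and clinicians that the tool's real-world behavior has moved, which is exactly the kind of continuous monitoring the "safe moving targets" concept presumes.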
By giving companies time to develop technical standards and guidance, the EU hopes to avoid a situation where overly rigid rules stifle innovation or force companies to choose between compliance and clinical effectiveness.

However, some industry groups argue the EU Omnibus amendments don't go far enough. Forty-eight EU-based trade associations wrote to the European Parliament and Council in early 2026, noting that many companies are already regulated under robust sectoral frameworks but are now caught in multiple layers of regulation. They called for exemptions for organizations covered by existing sector-specific AI rules, arguing that a "double or even triple layer of regulation" creates unnecessary burden.

The bottom line: Europe's AI Act is a groundbreaking step toward trustworthy AI, but precision medicine is exposing its limits. As more AI-enabled diagnostic and treatment tools reach European patients, regulators and industry must work together to clarify how the AI Act applies to medical applications, how it interacts with existing device and pharmaceutical rules, and how to allow these life-saving tools to evolve safely in real-world clinical settings.