Europe's Medical AI Gamble: Why Regulators Are Betting on Existing Device Rules Instead of New Laws
The European Parliament has chosen a middle path in regulating artificial intelligence used in medical devices: keep AI safety requirements intact, but assess them through existing medical device frameworks rather than creating separate AI-specific rules. This decision, made in April 2026, sidesteps a potential regulatory cliff where high-risk AI obligations could apply before the standards needed to implement them are ready.
Why Medical Device Regulators, Not AI Specialists, Will Oversee Clinical AI
The Parliament's negotiating position on the Digital Omnibus, a sweeping EU regulatory package, directs medical AI systems to be evaluated through the Medical Devices Regulation (MDR) and In Vitro Diagnostics Regulation (IVDR) conformity assessment process. This means the specialized notified bodies already responsible for approving medical devices will also assess whether an AI system meets transparency, robustness, data governance, and human oversight requirements.
The logic is straightforward: a company developing AI-assisted pathology software for cancer detection must already demonstrate clinical validity, post-market surveillance, and lifecycle safety management under the MDR. Adding a separate conformity pathway for the AI component would duplicate that process without adding meaningful patient safety checks. The same reasoning applies to manufacturers of AI-enabled continuous glucose monitoring systems: the risk to the patient is inseparable from the device itself.
"Keeping the AI Act and the MDR/IVDR as two overlapping sets of rules would fragment the framework, increase administrative complexity, and make it harder to assess AI risks in their full clinical context," explained Alexander Olbrechts, Director of Digital Health at MedTech Europe.
This architectural choice reflects a principle that AI-enabled medical technologies would remain fully subject to relevant AI Act requirements, including risk management, data governance, transparency, and human oversight. The difference is that these requirements would be assessed through the MDR/IVDR system already designed to evaluate them in their full clinical context, rather than through a separate AI-specific pathway.
What Could Go Wrong During the Transition Period?
The Parliament's decision creates a legitimate concern: embedding AI safeguards through future delegated acts and standards opens a period of legal uncertainty. AI-enabled devices are in clinical use today, and patients cannot wait for a multi-year standard-setting process to conclude before the rules are clear. The clock is particularly tight: if the Digital Omnibus is not adopted before August 2, 2026, the original high-risk AI obligations apply as currently written, before the standards and implementing measures needed to make them workable are in place.
This transition risk is real and requires urgent action. The Commission now has both the mandate and the opportunity to move quickly by laying down specific AI requirements in Annex I of the MDR/IVDR with clear application dates. Speed and specificity together are what transform a sound architectural decision into workable rules on the ground.
Notified bodies, too, must be equipped for this role. Assessing AI-enabled devices requires specialist competencies that are not uniformly present across the network today. Investment in notified body capacity, together with AI-specific designation criteria, is an essential condition for effective implementation.
Steps to Prepare for Medical AI Regulation Under the New Framework
- Engage with Notified Bodies Early: Manufacturers should identify and begin dialogue with notified bodies that have demonstrated competency in AI assessment, rather than waiting for formal designation criteria to be finalized.
- Document Clinical Context Thoroughly: Prepare comprehensive documentation of how AI components integrate with device functionality and patient safety, since the MDR/IVDR framework evaluates AI risks within their full clinical context.
- Monitor Delegated Acts and Standards Development: Track the Commission's progress on laying down specific AI requirements in Annex I, as these will define the exact conformity assessment pathway and timelines for your product category.
- Build Internal AI Governance Capacity: Establish teams capable of demonstrating compliance with transparency, data governance, risk management, and human oversight requirements before regulatory guidance is finalized.
What's Actually at Stake for Patients and Innovation?
The pipeline of AI-enabled medical technologies is substantial. Earlier cancer detection, more accurate glucose monitoring, and personalized surgical planning represent real opportunities to improve outcomes, reduce costs, and extend healthcare's reach to underserved populations. None of this can be realized in a regulatory vacuum.
The Parliament's negotiating position creates the conditions for three critical outcomes. Manufacturers need legal certainty and an efficient system. Notified bodies need clear criteria and adequate resources. Patients need assurance that the clinical AI tools used on them today meet a robust set of requirements assessed across their lifecycle.
The stakes extend beyond Europe. Poland, for example, has adopted its own draft AI law aligned with the EU's AI Act framework, establishing a new supervisory body called the Commission for the Development and Safety of Artificial Intelligence (KRiBSI). This body will oversee compliance, conduct administrative proceedings, and manage regulatory sandboxes where companies can test AI solutions in controlled environments. Similar regulatory moves across member states underscore the urgency of clarity at the EU level.
The decision to embed AI safeguards within existing medical device frameworks rather than create parallel rules represents a pragmatic approach to a genuine dilemma: how to ensure robust safety oversight without creating administrative paralysis. Whether this architecture succeeds depends entirely on speed and specificity in the months ahead. Manufacturers, notified bodies, and regulators all need clarity before August 2026. The window for delivering it is narrow.