Europe's Medical AI Gets a Practical Rulebook: Why Doctors and Device Makers Are Breathing Easier

Europe's approach to regulating artificial intelligence in healthcare is shifting from creating new bureaucratic layers to working within existing systems that already understand patient safety. The European Parliament recently adopted a negotiating position on the Digital Omnibus that fundamentally changes how AI-powered medical devices will be overseen, moving away from separate AI rulebooks toward integration with established medical device regulations. This matters because hospitals are already using AI to read scans, flag health risks, and predict patient deterioration, yet the rules governing these tools have been fragmented and unclear.

The stakes are high and the timeline is tight. If the Digital Omnibus is not finalized before August 2026, the original high-risk AI obligations will kick in before the standards and implementation measures needed to make them workable are actually ready. That creates a dangerous gap where manufacturers and hospitals won't know exactly what rules apply to the AI systems already in clinical use.

What Changed in the Parliament's Position on Medical AI?

Rather than weakening safety requirements, the European Parliament chose to keep all the core protections intact. Human oversight, transparency, robustness, data governance, and detailed logging of AI decisions remain mandatory. The key shift is where these requirements are assessed and enforced.

Instead of creating a separate conformity pathway just for the AI component of medical devices, the Parliament decided to embed AI safeguards directly into the existing Medical Devices Regulation (MDR) and In Vitro Diagnostics Regulation (IVDR) frameworks. These are the systems that already evaluate medical devices for safety and effectiveness. Specialized notified bodies, which are organizations already responsible for certifying medical devices, would handle the AI assessment as part of their normal work.

"There was a real risk that the Digital Omnibus process could strip away AI-specific obligations under the banner of simplification. Simplification is a legitimate and necessary goal, and the ongoing MDR/IVDR revision is the right vehicle for it. The goal is a regulatory system that is both lean and safe. Those two objectives are compatible, provided the right things are simplified," explained Alexander Olbrechts, Director of Digital Health at MedTech Europe.


Why Does Embedding AI Rules Into Existing Medical Device Frameworks Make Sense?

The logic is straightforward: when a company develops AI-assisted pathology software for cancer detection, the clinical risk comes from the entire system, not just the AI component in isolation. That company must already demonstrate clinical validity, post-market surveillance, and lifecycle safety management under the MDR. Creating a separate conformity pathway specifically for the AI piece would duplicate work without adding any meaningful patient safety benefit.

Consider a practical example. A manufacturer of AI-enabled continuous glucose monitoring systems must ensure the device works safely and accurately in real patients. The risk to the patient is inseparable from the device itself. From a patient safety perspective, it makes sense to manage AI medical technology risks within the device framework rather than splitting oversight between two different regulatory systems.

Moving medical devices from Annex I Section A to Section B of the AI Act does not mean reducing oversight, control, or safety verifications. Instead, it ensures these requirements are applied coherently within the MDR/IVDR framework. AI-enabled medical technologies remain fully subject to relevant AI Act requirements, including risk management, data governance, transparency, and human oversight. The difference is that these requirements would be assessed through the MDR/IVDR system already designed to evaluate them in their full clinical context.

How to Implement AI Medical Device Rules Effectively

  • Speed and Specificity: The European Commission must lay down specific AI requirements in Annex I with clear application dates to give manufacturers and notified bodies the certainty they need from the outset, rather than waiting years for standards to be developed.
  • Notified Body Capacity: Assessing AI-enabled devices requires specialist competencies not uniformly present across the notified body network today, so targeted investment in notified body capacity and AI-specific designation criteria is essential for effective implementation.
  • Clear Transition Rules: Embedding AI safeguards via future delegated acts and standards creates a period of legal uncertainty during which neither regime is clearly applicable, so the Commission must move quickly to eliminate this gap before August 2026.

The transition risk is real and acknowledged by industry. AI-enabled devices are in clinical use today, and patients cannot wait for a multi-year standard-setting process to conclude before the rules are clear. The Commission now has both the mandate and the opportunity to move quickly. Laying down specific AI requirements in Annex I with clear application dates would transform a sound architectural decision into workable rules on the ground.

What's Actually at Stake for Patients and Innovation?

The pipeline of AI-powered medical innovations is substantial. Earlier cancer detection, more accurate glucose monitoring, and personalized surgical planning are all within reach. The potential to improve outcomes, reduce costs, and extend healthcare's reach to underserved populations is real. None of it is realized in a regulatory vacuum.

Manufacturers need legal certainty and an efficient system to bring these innovations to market. Notified bodies need clear criteria for assessing AI components. Patients need assurance that the clinical AI tools used on them today meet a robust set of requirements assessed across their entire lifecycle. The Parliament's negotiating position creates the conditions for all three outcomes.

The broader Digital Omnibus package covers significant ground beyond the AI Act, including the General Data Protection Regulation (GDPR) and the Data Act. However, the intersection of the AI Act with the MDR and IVDR is where the stakes for patients and the sector are most immediate. Along with the MDR and IVDR revision, the Digital Omnibus represents Europe's chance to deliver a regulatory system that is both protective and practical, allowing innovation to flourish without compromising safety.

" }