The European Union is weighing two competing proposals that could fundamentally reshape how artificial intelligence in medical devices is regulated, potentially removing systematic safety safeguards that protect patients from AI-specific risks such as algorithmic bias and system failures. One proposal aims to streamline overlapping regulations; the other would exclude medical AI from the EU AI Act's high-risk requirements altogether, unless the European Commission reintroduces them later through separate acts.

What's Driving This Regulatory Shift?

For years, the health technology industry has complained about regulatory duplication. Medical AI devices must currently comply with both the EU Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), which focus on safety and performance, plus the EU AI Act's requirements for high-risk AI systems, which mandate risk management, transparency, cybersecurity measures, and human oversight. Companies argue this creates unnecessary compliance costs and delays patient access to promising tools.

The European Commission has proposed two different solutions. The first, called the "Digital Omnibus," would streamline matters by allowing independent review organizations (called "notified bodies") to assess both sets of requirements in a single review process. The second proposal, led by the Commission's health and food safety division, would go further, removing medical AI from the AI Act's high-risk framework entirely and leaving only MDR/IVDR requirements in place.

Why Does Medical AI Need AI-Specific Safeguards?

Here's where the stakes get high for patients. The MDR and IVDR were designed to evaluate whether medical devices meet predefined safety specifications: does a blood glucose monitor give accurate readings? Does a diagnostic test detect disease reliably? But these frameworks don't address risks unique to artificial intelligence systems. The EU AI Act's high-risk requirements fill that gap by requiring developers to evaluate AI-specific hazards, including the following (a brief illustrative sketch of the first two checks appears after this list):

- Demographic Bias: AI systems trained on limited populations may perform poorly for underrepresented groups, potentially leading to misdiagnosis or inappropriate treatment recommendations.
- Model Drift: AI systems can degrade over time as clinical practice evolves or patient populations change, yet traditional device regulations don't mandate ongoing monitoring for this type of failure.
- Algorithmic Uncertainty: Unlike traditional medical devices with fixed performance characteristics, AI systems operate as "black boxes" where even developers may struggle to explain why the system made a particular recommendation.
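To make the first two hazards concrete, here is a minimal sketch of the kind of subgroup and drift check the AI Act's risk-management requirements point toward. Everything in it is hypothetical: the field names (`sex`, `label`, `pred`), the thresholds, and the helper functions are invented for illustration and come from neither the AI Act nor the MDR/IVDR.

```python
from collections import defaultdict

def sensitivity_by_subgroup(records, group_key="sex"):
    """Per-subgroup sensitivity (true-positive rate) of an AI diagnostic.

    `records` is a list of dicts with hypothetical fields:
    {"sex": ..., "label": 0 or 1 (ground truth), "pred": 0 or 1 (AI output)}.
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for r in records:
        if r["label"] == 1:  # only positive cases count toward sensitivity
            if r["pred"] == 1:
                tp[r[group_key]] += 1
            else:
                fn[r[group_key]] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def flag_bias(per_group, max_gap=0.10):
    """Demographic bias check: flag a gap between the best- and worst-served
    subgroups larger than an (illustrative) 10 percentage points."""
    return max(per_group.values()) - min(per_group.values()) > max_gap

def flag_drift(baseline, recent, max_drop=0.05):
    """Model drift check: flag any subgroup whose sensitivity fell by more
    than an (illustrative) 5 percentage points since the baseline evaluation."""
    return any(baseline[g] - recent.get(g, 0.0) > max_drop for g in baseline)
```

Run periodically against a held-out clinical test set (for example, `flag_drift(sensitivity_by_subgroup(last_year), sensitivity_by_subgroup(this_quarter))`), checks like these would catch silent degradation that a one-time MDR/IVDR conformity assessment cannot.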
According to guidance issued by the Medical Device Coordination Group, "the MDR and IVDR requirements address risks related to medical device software, but they do not explicitly address risks specific to AI systems. The AI Act complements the MDR/IVDR by introducing requirements to address hazards and risks to health, safety, and fundamental rights that are specific to AI systems."

What Could Patients Lose?

If the second proposal passes, removing medical AI from the AI Act's high-risk framework, the consequences for patients could be substantial. Under current AI Act requirements, developers must ensure that healthcare providers receive appropriate training on system limitations, confidence levels, and when human oversight is necessary. Without these obligations, clinicians would revert to traditional "instructions for use" documents that emphasize intended purpose and performance but say little about algorithmic uncertainty or automation bias.

Patients may no longer benefit from systematic checks for population bias, meaning an AI diagnostic tool trained primarily on data from one demographic group could be deployed without rigorous evaluation of how it performs across different populations. Additionally, systems whose performance degrades silently as clinical practice evolves (a phenomenon called model drift) would no longer be subject to mandatory monitoring requirements.

How to Evaluate Medical AI Safety: What Patients Should Know

While regulatory frameworks evolve, patients and healthcare providers can take steps to ensure AI tools are used responsibly:

- Ask About Testing: Request information about how the AI system was tested across different patient populations, age groups, and demographic backgrounds to understand whether bias testing was conducted.
- Understand Limitations: Ask your healthcare provider what the AI system's confidence level is for your specific diagnosis or recommendation, and whether human clinician review is part of the decision-making process.
- Seek Transparency: Inquire whether the AI system's reasoning can be explained in plain language, rather than operating as a complete "black box" that even its developers cannot interpret.

The Uncertainty Ahead

Both proposals remain subject to negotiation by the Council of the European Union and the European Parliament, so their final form is uncertain. However, they signal a fundamental policy debate: should medical AI remain subject to systematic AI Act safeguards by default, or should those protections apply only selectively in the future, based on decisions made by the European Commission?

For AI medical device developers, the uncertainty cuts both ways. Companies that have invested in robust governance structures, bias mitigation, and explainability may find themselves competing against products optimized for minimal compliance. For clinicians, the loss of AI literacy requirements means they'll be expected to use AI safely and manage edge cases without the regulatory guarantee that systems are designed to support meaningful human oversight. And for patients, the removal of systematic AI-specific safeguards weakens protections precisely where AI introduces new failure modes that traditional medical device regulations were never designed to address.