Europe's AI Regulation Is Reshaping Medical Device Approval, But the FDA Is Falling Behind
Europe's approach to regulating artificial intelligence in healthcare is outpacing the United States, creating a significant gap in how quickly AI-powered medical devices reach patients. The EU AI Act, which began phased enforcement in 2024, has established a clear pathway for evaluating AI systems used in medical screening as high-risk technologies, while the FDA lacks an equivalent mechanism to assess how humans and AI algorithms work together as a unified diagnostic system.
How Is the EU AI Act Changing Medical Device Regulation?
The EU AI Act classifies AI systems used in medical screening under Annex III as high-risk, but crucially, it creates a structured conformity assessment pathway that explicitly accommodates post-market performance data as part of ongoing compliance. This framework allows regulators to evaluate system-level performance, not just the algorithm in isolation. The CE marking pathway for AI-augmented mammography tools has already moved forward in Europe precisely because the regulatory architecture anticipated the need to assess how AI and radiologists perform together.
This contrasts sharply with the FDA's current approach. The agency's 2021 artificial intelligence and machine learning action plan committed to developing good machine learning practice standards and frameworks for algorithm transparency, but those commitments remain prospective rather than actionable. The FDA evaluates the algorithm and its intended use separately, but does not currently require or formally describe how sponsors should design clinical evidence to characterize combined human-plus-AI performance as the regulated entity.
What Evidence Gap Is Holding Back US Approvals?
The tension between regulatory frameworks became visible following the MASAI trial, a randomized study conducted in Sweden that compared standard double reading by two radiologists against AI-supported single reading by one radiologist plus an algorithm. The follow-up data, published in The Lancet in April 2026, demonstrated that the AI-augmented arm achieved cancer detection accuracy beyond what two radiologists working independently could accomplish, while also reducing radiologist workload.
This finding challenges a fundamental assumption in medical regulation: that more expert review always equals better outcomes. The MASAI results show that one radiologist plus a well-validated algorithm outperformed two radiologists. The algorithm is not merely assisting the clinician; it is compensating for cognitive limitations in sequential human review. That reframing changes the regulatory question entirely, transforming the device from a tool that supports a clinician into a system that, in combination with a clinician, constitutes a superior diagnostic approach.
Yet the FDA's current intake process cannot easily accommodate this evidence. A sponsor bringing an AI-augmented screening tool to the US market under a 510(k) or De Novo pathway faces unpredictable outcomes. The FDA's guidance on clinical decision support software draws a line between software that informs a clinician's independent judgment and software that replaces it. When an AI-plus-human system outperforms a human-plus-human system, that distinction becomes clinically meaningless, but the regulatory framework has no mechanism to process that reality.
What Explains the Regulatory Divide?
- FDA Framework Limitation: The FDA evaluates algorithms and intended use separately, without a formal requirement to characterize combined human-plus-AI performance as a single regulated entity, creating a gap between clinical evidence and regulatory authorization.
- EU Structural Advantage: The EU AI Act's conformity assessment pathway explicitly accommodates post-market performance data and system-level evaluation, allowing regulators to approve AI-augmented medical tools based on how they perform in real clinical practice.
- Commercial Impact: Companies like iCAD and Lunit have FDA clearance for narrower intended uses than their clinical evidence supports, while their European authorizations reflect broader system-level performance claims enabled by the EU's regulatory structure.
The radiologist shortage adds urgency to this regulatory gap. The American College of Radiology has documented a persistent and worsening shortfall in breast imaging specialists, particularly in rural and underserved markets. AI-augmented reading systems could help address this capacity crisis, but only if regulators can approve them based on the evidence that demonstrates their value.
Companies navigating this tension are already making strategic decisions based on regulatory geography. iCAD, whose AI-powered mammography analysis platform holds FDA clearance for detection assistance, has been constrained by a narrower intended use than the clinical evidence now supports. Lunit, a South Korean AI medical imaging company whose INSIGHT MMG product holds both FDA clearance and CE marking, has published peer-reviewed data showing sensitivity improvements in breast cancer detection. Neither company's US label currently reflects the kind of system-level performance claim that MASAI's randomized data would support.
"The European regulatory posture is instructive by contrast. The EU AI Act, which began phased enforcement in 2024, classifies AI systems used in medical screening as high-risk under Annex III, but it also creates a structured conformity assessment pathway that explicitly accommodates post-market performance data as part of ongoing compliance," explained Moe Alsumidaie, Chief Editor of The Clinical Trial Vanguard.
What the MASAI follow-up data now demands is a specific, time-bound regulatory response from the FDA: a guidance document or final rule that defines how sponsors should design and submit clinical evidence for AI-augmented diagnostic systems evaluated as human-algorithm dyads. The evidence standard should not require a new MASAI-scale trial for every product, but it should define the minimum evidentiary bar for a system-level performance claim, including pre-specified reader study designs, reference standard requirements, and the statistical thresholds that distinguish decision support from system superiority.
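To make concrete what a pre-specified statistical threshold for a system-level superiority claim might look like, here is a minimal sketch of a one-sided two-proportion z-test comparing cancer detection rates between an AI-plus-single-reader arm and a double-reading arm. The function name and all counts are hypothetical illustrations, not MASAI data or FDA-endorsed methodology; a real submission would pre-specify the design, alpha level, and reference standard in consultation with regulators.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """One-sided two-proportion z-test: is detection rate 1 superior to rate 2?

    Uses the normal approximation with a pooled standard error.
    Returns (z statistic, one-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail normal probability
    return z, p_value

# Hypothetical cohorts (illustrative only, NOT trial data):
# cancers detected per 10,000 screens in the AI-augmented arm
# versus the double-reading arm.
z, p = two_proportion_z(x1=68, n1=10_000, x2=50, n2=10_000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

A guidance document would sit on top of a calculation like this: it would fix the one-sided alpha, require the comparison to be pre-registered, and specify how the reference standard (e.g., interval cancers at follow-up) is adjudicated, so that "superiority" means the same thing across sponsors.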
The broader implication extends beyond mammography. As AI tools move into other diagnostic and screening contexts, the FDA's structural gap will become more visible. Europe's regulatory architecture, by contrast, was designed to anticipate exactly this kind of human-algorithm integration. The result is not just a difference in approval timelines; it is a difference in which patients get access to validated AI-augmented diagnostic systems first, and which markets become the proving ground for the next generation of clinical AI tools.