The European Union's AI Act requires healthcare providers to explain AI-driven medical recommendations to patients, but a critical gap exists between legal compliance and real-world understanding. When a radiologist tells a patient that an AI system flagged a lung nodule with 87% confidence of cancer, the patient often cannot grasp what that figure means or how to use it to make informed decisions about their care. The law says explanations must be provided; it does not say they must be comprehensible.

This tension sits at the heart of a new challenge facing European healthcare systems as the AI Act, which entered into force in August 2024 with obligations phasing in through 2027, reshapes how medical AI is deployed and explained. The regulation gives patients legal grounds to seek explanations under both the AI Act and the General Data Protection Regulation (GDPR), a privacy law that also covers automated decision-making. Yet neither framework specifies what a "meaningful" explanation actually looks like in clinical practice, leaving hospitals, doctors, and AI developers scrambling to figure out how to comply.

Why Do Current AI Explanations Fall Short in the Clinic?

The technical challenge is formidable. The most accurate medical AI systems operate through millions of interacting parameters in ways that even their creators cannot fully trace. A radiologist reviewing an AI recommendation receives a confidence score and perhaps a heat map showing which parts of an image the algorithm focused on, but these outputs reveal what the AI concluded, not why, and not how to explain it to a patient.

Current explainable AI methods only partially bridge this gap. Saliency maps show which regions of an image the algorithm weighted most heavily, but not what the system understood about them. Feature importance rankings indicate which variables mattered, but not how they interacted or why particular thresholds were significant. These post hoc approximations attempt to reconstruct reasoning after the fact, and they can produce plausible-sounding explanations that do not accurately reflect the model's internal logic.
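To see why a saliency map localises attention without explaining reasoning, consider how one common variant is computed. The sketch below uses a vanilla gradient method that scores each pixel by how strongly it influences the model's top prediction; it assumes a generic PyTorch image classifier with illustrative function and variable names, not the attribution method of any particular clinical system.

```python
import torch

def gradient_saliency(model, image):
    """Vanilla gradient saliency for one image tensor of shape (1, C, H, W).

    Illustrative sketch only: assumes `model` is a PyTorch classifier that
    returns raw class scores; deployed clinical systems may use different
    attribution methods entirely.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image)              # raw score for each class
    top_score = scores.max()           # score of the predicted class
    top_score.backward()               # gradient of that score w.r.t. each pixel
    # Collapse channels: keep the largest absolute gradient at each location.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)
    return saliency                    # (H, W) map of per-pixel influence
```

The result answers only the question "which pixels moved the score the most": it shows where the model looked, not what it understood about those regions or why the finding should matter for a patient's decision.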
Even when technical explanations exist, delivering them in clinical practice creates additional barriers. Clinicians typically receive AI outputs as recommendations rather than reasoning, meaning they may be asked to explain a decision they do not fully understand themselves. Time pressure makes this worse: clinicians already struggle to fit thorough conversations into appointments, and AI adds another layer of complexity to encounters that are already stretched.

What Stops Patients From Understanding Medical AI Explanations?

The literacy challenge is equally significant. Between 22% and 58% of EU citizens report difficulty accessing, understanding, appraising, and applying the health information they need to navigate healthcare services, with pronounced gaps among older adults, lower socioeconomic groups, and rural communities. Interpreting AI outputs requires statistical and technical literacy that even a high level of general education does not guarantee. Many highly educated individuals struggle with medical statistics and probability statements, so technically accurate explanations may create barriers regardless of educational background.

Automation bias compounds the problem. When clinicians defer to algorithmic recommendations that conflict with their own clinical judgment, the explanation they deliver may reflect the AI's conclusion rather than an independent clinical assessment. A prospective study of radiologists reading mammograms found that incorrect AI suggestions pulled readers toward an incorrect diagnosis regardless of their level of experience. An explanation delivered by a clinician who has already deferred to the algorithm challenges the assumption, embedded in both legal frameworks, that human oversight guarantees meaningful review.

Providing more technical detail is unlikely to help. Research on medical decision-making suggests that excessive technical information can cause cognitive overload, leading patients to defer to physician authority rather than engage with the explanation. To participate meaningfully in decisions, patients typically need not a description of how an algorithm works but clarity about what is most relevant to their own situation.

How to Design AI Explanations That Patients Can Actually Use

- Involve Patients Early: Developers should design explanation systems with patient input from the outset, testing comprehension with actual patients rather than demonstrating compliance with legal standards alone. Patient advocates have highlighted that explanation approaches tend to reflect what developers and regulators consider important, which may not always align with what patients need to know.

- Focus on Decision-Relevant Clarity: Drawing on principles from shared decision-making and risk communication, a useful patient-facing explanation could cover what the system is recommending and at what decision point; how confident it is and what that confidence means in practical terms; key limitations relevant to the patient, such as known performance gaps in specific populations; and the viable alternatives. This reframes the goal from technical transparency to decision-relevant clarity, with effectiveness measured by whether patients can answer key questions after the encounter.

- Allocate Time and Training: Healthcare institutions can bridge this gap by setting aside time for AI discussions, training staff to support patients in navigating AI-driven recommendations, and establishing clear protocols that shift explanation from a compliance exercise toward genuine patient understanding. Policymakers could support this by developing standards focused on comprehension and by investing in digital health literacy programs.

The core issue is that the EU AI Act and the GDPR establish that patients have legal grounds to seek explanations of AI-driven medical recommendations, but these protections are stronger in principle than in practice. Many clinically deployed medical AI systems will fall into the AI Act's high-risk category, particularly where they are regulated as medical device software or serve as safety components of regulated products. Where the AI Act does not directly apply, patients may draw on the GDPR. However, most clinical AI does not meet the GDPR's threshold for purely automated decisions, since a human clinician typically remains the formal decision maker. This creates an ambiguity: the human oversight meant to protect patients may also weaken their legal claim to an explanation.

The challenge ahead is shifting from compliance to effectiveness. Regulators, developers, and healthcare institutions must move beyond asking "Was an explanation provided?" to asking "Can patients use it?" Without this shift, the EU's landmark AI regulation risks creating a false sense of patient protection while leaving people unable to make truly informed decisions about their medical care.