Patients in Europe now have a legal right to understand why an AI system recommended a particular medical decision, but neither the law nor current practice specifies what a meaningful explanation actually looks like in the clinic. The European Union's AI Act, which entered into force in August 2024, requires that deployers of high-risk AI systems provide affected individuals with clear and meaningful explanations of decisions shaped by these systems. This creates a powerful legal foundation for patient autonomy, yet it immediately raises a question the law alone cannot answer: what does a meaningful explanation of an AI medical decision actually look like when a radiologist has 10 minutes per patient and the model itself contains millions of interacting parameters that even its creators cannot fully trace?

## What's the Real Problem With AI Explanations in Healthcare?

Consider a common scenario: an AI system flags a nodule on a lung scan and assigns it an 87% malignancy probability. The radiologist receives a confidence score and a heat map showing which parts of the image the algorithm focused on. When the patient asks, "Why does the computer think it's cancer?" the radiologist realizes the output tells her what the AI concluded, but not why, or how to explain it to her patient in a way that makes sense.

The technical challenge is fundamental: the most accurate AI models are precisely the ones whose internal reasoning is hardest to trace. Requiring greater transparency can push developers toward simpler, more interpretable models that sacrifice diagnostic accuracy, creating a direct trade-off with patient care outcomes.

Current explainable AI methods only partially address this gap. Saliency maps show which regions of an image the algorithm weighted most heavily, but not what it understood about them.
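To make that limitation concrete, here is a minimal sketch of gradient-based saliency, assuming a toy linear scorer stands in for a real image classifier (all names, sizes, and values below are illustrative, not any deployed system):

```python
import numpy as np

# Hypothetical toy setup: a linear scorer over an 8x8 "scan patch"
# stands in for a real classifier. Weights and input are random.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))   # the model's learned pixel weights
image = rng.normal(size=(8, 8))     # one input patch

def malignancy_score(img):
    """Logistic of a weighted pixel sum -- a stand-in classifier."""
    return 1.0 / (1.0 + np.exp(-(weights * img).sum()))

# Gradient-based saliency: how much the score moves if each pixel changes.
# Chain rule through the sigmoid gives score * (1 - score) * weight per pixel,
# so the map simply highlights heavily weighted regions -- it says nothing
# about *what* the model understood there.
score = malignancy_score(image)
saliency = np.abs(weights) * score * (1.0 - score)
hottest_pixel = np.unravel_index(np.argmax(saliency), saliency.shape)
```

The map tells you where the score is most sensitive, which is exactly what the radiologist's heat map provides: location, not meaning.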
Feature importance rankings indicate which variables mattered, but not how they interacted or why particular thresholds were significant. These post hoc approximations attempt to reconstruct reasoning after the fact, and they can produce plausible-sounding explanations that do not accurately reflect the model's internal logic.

Even where technical explanations exist, delivering them in clinical practice faces multiple barriers. Clinicians typically receive AI outputs as confidence scores and recommendations rather than reasoning, meaning they may be asked to explain a decision they do not fully understand themselves. They already struggle to find time for thorough clinical conversations, and AI adds another layer of complexity to encounters that are already stretched. Automation bias compounds the problem: clinicians may defer to algorithmic recommendations even when these conflict with their own clinical judgment. A prospective study of radiologists reading mammograms found that incorrect AI suggestions pulled readers toward an incorrect diagnosis regardless of their level of experience.

## Can Patients Actually Use the Explanations They Receive?

Even if clinicians provide technically accurate explanations, understanding is far from guaranteed. Between 22% and 58% of EU citizens report difficulty accessing, understanding, appraising, and applying the health information they need to navigate healthcare services, with pronounced gaps among older adults, lower socioeconomic groups, and rural communities.

Interpreting AI outputs requires statistical and technical literacy that even high levels of general education do not guarantee. Many highly educated individuals struggle with medical statistics and probability statements, so technically accurate explanations may create barriers regardless of educational background. The paradox deepens when you consider what actually helps patients make informed decisions.
Research on medical decision-making suggests that excessive technical information can lead to cognitive overload, causing patients to defer to physician authority rather than engage with the explanation. To participate meaningfully in decisions, what patients typically need is not a description of how an algorithm works, but clarity on what is most relevant for their own situation.

There is also a legal ambiguity that undermines patient protections. Under the EU's General Data Protection Regulation (GDPR), Article 22 provides safeguards against decisions "based solely on automated processing," including the right to meaningful information about the logic behind those decisions. However, most clinical AI does not meet Article 22's threshold, since a human clinician typically remains the formal decision maker. This creates a paradox: the human oversight meant to protect patients may also weaken their legal claim to an explanation, because the decision may no longer count as purely automated.

## How to Bridge the Gap Between Legal Rights and Practical Understanding

- Developer Responsibility: Design explanation systems with patient input from the outset, testing comprehension with actual patients rather than demonstrating compliance with legal standards alone. Drawing on principles from shared decision-making and risk communication, a useful patient-facing explanation could include: what the system is recommending and for what decision point; how confident it is and what that confidence means in practical terms; key limitations relevant to the patient, such as known performance gaps in specific populations; and viable alternative options.
- Healthcare Institution Support: Allocate time for AI discussions, train staff to support patients in navigating AI-driven recommendations, and establish clear protocols that shift explanation from a compliance exercise toward genuine patient understanding. This reframes the goal from technical transparency to decision-relevant clarity, with effectiveness measurable by whether patients can answer key questions after an encounter.
- Policy and Literacy Investment: Policymakers could support this by developing standards focused on comprehension rather than mere disclosure, and by investing in digital health literacy programs that help patients navigate AI-driven healthcare decisions.
- Patient Involvement in Design: Involving patients in the design of explanation systems from the start would strengthen all of these efforts, since explanation approaches tend to reflect what developers and regulators consider important, which may not always align with what patients need to know.

The core insight is this: compliance with the law's requirement to "provide an explanation" is not the same as ensuring patients can actually use that explanation to make informed decisions about their care. The EU AI Act and GDPR have created a legal foundation for patient autonomy, but closing the gap between legal rights and practical understanding requires a fundamental shift in how healthcare institutions, AI developers, and policymakers approach the problem. It is not enough to ask "Was an explanation provided?" The real question is "Can patients use it?"
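As a coda, the four explanation components suggested under developer responsibility can be made concrete as a structured record. A hypothetical Python sketch, with field names and example values invented for illustration (not a real clinical schema or regulatory standard):

```python
from dataclasses import dataclass

# Hypothetical sketch: the schema and values are invented to illustrate the
# four patient-facing explanation components, not a real clinical standard.
@dataclass
class PatientExplanation:
    recommendation: str          # what the system recommends, and at what decision point
    confidence: float            # model confidence, 0..1
    confidence_plain: str        # what that confidence means in practical terms
    key_limitations: list[str]   # e.g. known performance gaps in specific populations
    alternatives: list[str]      # viable alternative options

example = PatientExplanation(
    recommendation="Biopsy recommended for the flagged lung nodule",
    confidence=0.87,
    confidence_plain=("Of 100 similar nodules in the system's validation data, "
                      "roughly 87 turned out to be malignant"),
    key_limitations=["Validated mainly in older patients; weaker evidence for younger groups"],
    alternatives=["Short-interval follow-up scan", "Review by a second radiologist"],
)
```

A record like this could be rendered into plain language for the patient and paired with comprehension questions, matching the proposal to measure whether patients can answer key questions after an encounter.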