Medical schools are racing to teach future clinicians how to use AI responsibly before they enter practice, recognizing that AI tools can generate convincing but entirely fabricated medical information. Duke University's new seminar for physician assistant students represents a shift from passive learning about AI to active, critical evaluation of what these tools actually produce in clinical settings.

Why Are Medical Schools Suddenly Teaching AI Skepticism?

Nicholas Hudak, PhD, PA-C, a professor in family medicine and community health at Duke, designed "Artificial Intelligence and Evidence-Based Medicine: A Seminar" after noticing a troubling pattern. His neurology patients were arriving with information they'd found through AI systems, and his students were beginning to incorporate AI into their clinical reasoning without understanding its limitations.

"I realized that we needed to be proactive in preparing future clinicians and current clinicians to use these tools responsibly because this information could be used to influence decision making," Hudak explained.

The core problem is that AI language models, which are trained on vast amounts of text data, can "hallucinate," producing fake studies, incorrect citations, or plausible-sounding but entirely fabricated medical information. For clinicians who might rely on these outputs to inform patient care decisions, the stakes are high. Unlike a calculator, which gives an answer that is plainly right or wrong, AI can give you something that looks right but is dangerously wrong.

What Does This New Training Actually Teach?

The seminar, which piloted in spring 2025 as part of Duke's second-year physician assistant program, takes a hands-on approach. Rather than lecturing about AI theory, students actively use AI platforms to complete clinical tasks, then compare their results to traditional evidence-based medicine methods. "Students compare the long form work that they did to the short form work the AI platform produced and use that to determine that platform's accuracy," Hudak said.

The course structure includes several key components designed to build practical AI literacy:

- Recorded Presentation: Students watch an introductory video explaining how AI technology works and reviewing current platforms available in healthcare settings.
- Required Reading: Participants study scholarly material on the intersection of AI and healthcare, focusing on how AI can support clinical decision-making while understanding its limitations.
- Hands-On Practice: Students use AI tools to work through clinical scenarios, paying close attention to the sources these platforms cite when generating responses.
- Group Reflection: At the end, students share what they learned and discuss how to model responsible AI use for their future patients.

Hudak frames the goal for students this way: "An analogy that I tell my students is that we're playing in the sandbox. We're not going to master AI, but we are testing out the tools and gaining some exposure."

The seminar is supported by an AI Jump Start Grant from Duke Learning Innovation and Lifetime Education (LILE).

How Can Clinicians Verify AI-Generated Medical Information?

The seminar teaches students to apply the traditional evidence-based medicine cycle to AI outputs: gather the best available evidence, critically appraise the literature, and apply research findings to clinical scenarios. When AI provides information, students learn to ask: Where did this come from? Can I verify the sources? Does this match what I know from peer-reviewed literature?
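One way to make the "can I verify the sources?" question concrete is to check whether a citation an AI tool produced actually exists in a bibliographic database. The sketch below is not part of Duke's seminar; it is a minimal illustration, assuming Python with network access, that queries NCBI's public PubMed E-utilities endpoint for an article title and reports whether any matching record exists. A zero-hit search is a red flag rather than proof of fabrication, since real titles can be paraphrased or indexed only elsewhere.

```python
"""Check whether an AI-cited article title exists in PubMed.

Minimal sketch using NCBI's public E-utilities esearch endpoint.
A zero-hit result is a prompt for further verification, not a verdict.
"""
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def pubmed_hits(title: str) -> int:
    """Return the number of PubMed records whose title matches `title`."""
    query = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f"{title}[Title]",  # restrict the search to the title field
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{query}", timeout=10) as resp:
        data = json.load(resp)
    # esearch returns the match count as a string inside "esearchresult"
    return int(data["esearchresult"]["count"])


if __name__ == "__main__":
    # Hypothetical title an AI assistant might cite; substitute the real one.
    title = "Efficacy of statins in primary prevention of cardiovascular disease"
    hits = pubmed_hits(title)
    if hits == 0:
        print("No PubMed record with this title -- treat the citation as unverified.")
    else:
        print(f"{hits} matching record(s) found; read the actual paper next.")
```

Even when a record exists, the check only confirms the citation is real; whether the paper actually supports the AI's claim still requires reading it, which is exactly the appraisal step the seminar emphasizes.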
This critical approach is essential because AI systems don't inherently understand medical accuracy the way a trained physician does. They're pattern-matching systems that can produce grammatically perfect, medically plausible text that is nonetheless incorrect. By teaching students to treat AI outputs as a starting point for investigation rather than a final answer, educators are building a generation of clinicians who will use these tools as assistants, not authorities.

"We want to be trusted partners for our patients and trusted messengers of scientific information, but also we need to always be critiquing the trustworthiness of the information we're looking at online, what we hear from people in our lives, and certainly what patients are hearing from clinicians," Hudak emphasized.

Is This Training Spreading Beyond Duke?

While the course was designed specifically for physician assistant students, Hudak actively encourages collaboration with educators in other health professions. "Health care is delivered by teams of clinicians, and for me, it's very important for us to learn from each other and how we're teaching our learners," he noted. This collaborative approach reflects a broader recognition that AI literacy isn't a specialty skill; it's becoming a fundamental competency for all clinicians.

The timing of this educational shift is significant. Many medical schools, nursing programs, and other healthcare training institutions are still figuring out how to incorporate AI into their curricula. Duke's seminar offers a practical model: don't wait for AI to be perfect or fully understood; teach students to use it critically now, while they're still in training and can learn from mistakes in a controlled environment.

For medical students and residents entering practice in 2025 and beyond, this kind of training may become as essential as learning to read an X-ray or interpret lab results. The question is no longer whether AI will be part of clinical practice, but whether clinicians will be trained to use it responsibly.