Medical Schools Are Grappling With AI's Timing Problem: When to Use It, When to Hold Back

Medical educators are facing a paradox: artificial intelligence is already embedded in their systems, but deploying it carelessly can weaken student skills rather than enhance them. At the 2026 Innovations in Medical Education Conference hosted by the University of Miami Miller School of Medicine, leaders from across the country confronted an urgent question: not whether to use AI in training doctors, but how to use it responsibly and at precisely the right moment in a student's learning journey.

The conference drew 264 registrants, including international attendees from Australia, Brazil, Canada, Germany, Oman, Portugal, Qatar, and Spain, all grappling with the same challenge: AI is moving faster than institutions can safely integrate it. The central tension, as framed by leaders at the event, is that timing and intentionality matter enormously when it comes to whether AI helps or harms medical trainees.

What Happens When AI Is Introduced Too Early in Training?

One of the most striking warnings came from Patrick Tighe, M.D., professor of anesthesiology and associate dean for AI applications and innovations at the University of Florida College of Medicine. Dr. Tighe presented research showing that premature reliance on AI can actually cause what he called "mis-skilling" of learners. He cited a concrete example: Polish endoscopists who used AI tools for just three weeks performed worse at their procedures once the technology was removed.

"Your trainees will use agents. It will become an inevitability. But if these tools are misapplied, we risk cognitive decline. Used at the right time, they can escalate performance beyond what we thought possible. The key is giving AI very specific missions and understanding what we are trying to achieve," said Dr. Tighe.

Patrick Tighe, M.D., Professor of Anesthesiology and Associate Dean for AI Applications and Innovations, University of Florida College of Medicine

This finding challenges the assumption that more AI access equals better learning. Instead, educators need to think strategically about when students should practice with AI assistance and when they should work without it to build foundational competence.

How Should Medical Schools Implement AI Responsibly?

  • Define Specific Missions: Rather than deploying AI broadly, assign it narrow, intentional purposes. AI should support particular learning goals, not replace human judgment or critical thinking in foundational skills.
  • Evaluate Both Learner and Tool Output: Nicholas Tsinoremas, Ph.D., vice provost for research computing and data at the University of Miami, emphasized that educators must be able to assess whether the AI's output is actually helpful. "If you cannot evaluate the output, do not use it," he stated, noting that education is now one of AI's most active innovation zones.
  • Match Innovation With Transparency and Guardrails: Schools need clear policies, faculty training, and ongoing assessment of whether AI is achieving its intended educational outcomes without creating dependency or skill gaps.

The conference also explored whether AI can help develop quintessentially human skills like communication, empathy, and professionalism. The consensus was cautiously optimistic: AI-mediated simulations could help students practice difficult conversations, but only if faculty can evaluate both the learner's performance and the quality of the AI's feedback.

Why Faculty Culture Matters as Much as Technology

Beyond the technical challenges, medical educators emphasized that AI integration is fundamentally a cultural change. Barry Issenberg, M.D., a Miller School professor of medicine and director of the Gordon Center for Simulation and Innovation in Medical Education, noted that at any given institution, faculty members are at vastly different stages of comfort with AI.

"At any point, you may have faculty at very different stages of integration: innovators, early adopters and those just beginning. The first step is reminding everyone of our goal to train learners who will care for patients safely and ethically," explained Dr. Issenberg.

Barry Issenberg, M.D., Professor of Medicine and Director of the Gordon Center for Simulation and Innovation in Medical Education, University of Miami Miller School of Medicine

Rather than imposing top-down mandates to adopt AI tools, Dr. Issenberg advocated for peer-driven adoption, where faculty think critically about how to use AI in ways aligned with their educational values. This approach respects the reality that not every tool is right for every context or every stage of training.

Alexis Rossi Aguirre, Ph.D., director of Medbiquitous at the Association of American Medical Colleges (AAMC), added that infrastructure and collaboration across institutions are as important as the technology itself. "You can have the best AI tool, but if your infrastructure can't use it, it won't matter," she noted.

What Ethical Concerns Are Medical Educators Raising?

Ken Masters, Ph.D., a professor of medical informatics at Sultan Qaboos University in Oman, presented a broader ethical framework that goes beyond classroom concerns. He outlined a spectrum of AI integration, from AI as a simple tool to AI as a collaborator, confidant, or even an authority figure surpassing human judgment.

Dr. Masters raised several real-world risks that medical educators must anticipate. Autonomous AI agents have already caused problems, such as deleting entire email inboxes despite human intervention. Embodied AI agents (physical or virtual representations) are likely to appear in educational and clinical settings soon. And students are already forming relationships with AI tutors and peers, raising questions about the nature of those interactions and their impact on human connection.

"The basics matter. It is right to address them now. AI ethics in medical education must expand to consider society-wide implications, corporate influence and the long-term human-AI relationship. These issues can be addressed if we acknowledge them and begin to address them now on a global scale," stated Dr. Masters.

Ken Masters, Ph.D., Professor of Medical Informatics, Sultan Qaboos University, Oman

The conference also featured hands-on workshops where faculty learned prompt engineering, explored competency-based medical education with AI, and saw live demonstrations of AI-driven educational platforms already in use across health professions programs. These practical sessions underscored that AI integration requires ongoing learning and experimentation, not one-time adoption decisions.

As medical education leaders look ahead, the consensus is clear: the question is no longer whether AI will shape the future of medical training, but how quickly educators can adapt while keeping patient care, competence, safety, and human connection at the center of their decisions. The conversations started at this conference will continue long after, with cross-institutional and cross-disciplinary collaboration essential for navigating this evolving landscape successfully.