Physicians are embracing artificial intelligence at unprecedented rates, yet many fear the technology could undermine their core clinical abilities. A new survey from the American Medical Association reveals a striking contradiction: while over 81% of doctors now use AI in their practices, 88% worry that increased reliance on these tools could lead to skill loss, particularly among younger physicians.

Why Are Doctors Concerned About Losing Skills to AI?

The concern isn't abstract. As AI systems handle more diagnostic work, documentation, and clinical decision-making, physicians worry they may lose the hands-on experience needed to develop and maintain critical judgment. This anxiety is especially pronounced among doctors with 10 years or less in practice, who are still building foundational expertise.

The issue mirrors a broader phenomenon in medicine: cognitive offloading. When clinicians outsource thinking to technology, they risk atrophying the very skills that make them effective physicians. A high school intern who studied AI in healthcare at Stanford noted that "over-reliance on AI tools can lead to reduced critical thinking or cognitive offloading, especially at a time when critical skills and personal judgment are still developing." This concern extends beyond individual practitioners to the entire medical profession.

How Are Physicians Using AI Today, and What Does That Mean for Training?

The American Medical Association's 2026 Physician Survey on Augmented Intelligence tracked adoption patterns across the profession. The results show rapid integration of AI into daily workflows, but also reveal where physicians see both promise and peril.

- Documentation and Research: The most common physician uses of AI center on medical research summarization and clinical care documentation, tasks that reduce administrative burden but don't directly involve patient diagnosis or treatment decisions.
- Diagnostic Support: Physicians increasingly rely on AI for imaging analysis, pathology review, and differential diagnosis generation, areas where skill-loss concerns are highest because these are core competencies.
- Administrative Automation: AI handles scheduling, billing, and other back-office functions that contribute to physician burnout, with 70% of doctors seeing AI as a tool to automate burnout-inducing tasks.

The challenge is that while AI excels at automating routine tasks, it may inadvertently prevent physicians from gaining the experience needed to handle complex or unusual cases. A radiologist who relies on AI to flag abnormalities might struggle when presented with an atypical presentation that the algorithm missed. This creates a vicious cycle: less hands-on experience leads to lower confidence, which increases reliance on AI, which further erodes skill development.

The 2026 AMA survey expanded specifically to examine physician training needs and concerns about skill loss, signaling that the medical profession recognizes this as a critical issue requiring attention. John Whyte, MD, MPH, CEO of the American Medical Association, emphasized that "AI has quickly become part of everyday medical practice. Physicians see real promise in its ability to support clinical decisions and cut down on administrative burden. But as this technology advances, it is critical that augmented intelligence be designed to enhance, not replace, physicians."

What Safeguards Do Physicians Want to Prevent Skill Erosion?

Physicians aren't passively accepting this risk. The survey reveals specific priorities for how AI should be integrated to protect clinical competency and maintain the integrity of medical training.

- Shared Decision-Making Authority: 85% of physicians want to be consulted or directly involved in decisions about AI adoption in their workplace, ensuring that implementation doesn't bypass clinical judgment.
- Safety and Efficacy Validation: 88% of physicians cite robust safety and efficacy validation as critical for broader AI adoption, meaning AI tools must prove they improve outcomes without degrading physician performance.
- Clear Liability Frameworks: Physicians rank clear liability frameworks highest among regulatory actions essential to building trust, because they need to understand who is responsible when AI makes an error.
- Data Privacy Protections: 86% of physicians prioritize data privacy, recognizing that patient information used to train AI systems must be handled with extreme care.

These demands reflect a deeper principle: physicians want AI to function as true augmented intelligence, a system that enhances human judgment rather than replacing it. The AMA uses the term "augmented intelligence" specifically to emphasize this assistive role, focusing on how AI's design should enhance human intelligence rather than displace it.

Steps to Maintain Clinical Skills While Using AI Tools

- Deliberate Practice: Physicians should intentionally review cases where AI provided recommendations, comparing the AI's reasoning to their own independent assessment before accepting the suggestion. This maintains diagnostic muscle memory and catches errors.
- Selective Automation: Rather than automating all routine tasks, physicians can reserve certain cases for manual review, ensuring they continue to encounter the full spectrum of clinical presentations and maintain pattern-recognition abilities.
- Peer Collaboration: Structured case discussions and second-opinion protocols, even when AI has already provided input, keep physicians engaged in active clinical reasoning and expose them to diverse perspectives.
- Ongoing Education: Medical training programs must evolve to include AI literacy and critical evaluation of algorithmic recommendations, preparing physicians to work effectively alongside these tools without becoming dependent on them.
The paradox of AI in medicine is that the same technology promising to reduce physician burden and improve patient outcomes could inadvertently weaken the clinical judgment that makes medicine effective. Confidence in AI tools is growing: in 2026, more than three-quarters of physicians believe AI improves their ability to care for patients, up from 65% in 2023. Yet this confidence must be tempered by vigilance about preserving the human expertise that remains irreplaceable.

The medical profession is at a critical juncture. As AI adoption accelerates, the decisions made now about how these tools are implemented will shape whether physicians become more capable or more dependent. The 88% of doctors worried about skill loss aren't being Luddites; they're recognizing a genuine risk that requires intentional safeguards.

The future of medicine depends not on choosing between AI and human expertise, but on designing systems where both strengthen each other.