Memorial Sloan Kettering Cancer Center released a comprehensive "Responsible Development and Use of Artificial Intelligence (AI) Policy" in June 2025, signaling a major shift in how leading healthcare institutions approach AI governance. As hospitals increasingly deploy AI systems for diagnosis, treatment planning, genetic screening, and patient monitoring, the need for clear ethical frameworks has become urgent. MSK's policy represents one of the first major institutional attempts to codify responsible AI practices across an entire medical center, offering a blueprint for other healthcare organizations wrestling with similar challenges.

The timing of MSK's policy reflects a broader transformation in healthcare. Researchers and clinicians are turning to computer-based systems to answer critical medical questions at an accelerating pace. These applications span multiple domains, from identifying which patients will respond to specific treatments to automating administrative and research tasks. Yet without clear governance structures, hospitals risk deploying AI systems that introduce bias, reduce transparency, or undermine clinical decision-making.

What Problems Is Healthcare AI Actually Solving Right Now?

The practical applications of AI in medical settings have expanded far beyond early proof-of-concept projects. Healthcare institutions now use AI tools across several key areas that directly affect patient care and operational efficiency. Understanding these real-world use cases helps explain why governance frameworks like MSK's have become essential.

- Diagnostic Support: AI systems assist clinicians in analyzing medical imaging, pathology slides, and other diagnostic data to identify diseases earlier and more accurately than traditional methods alone.
- Treatment Response Prediction: Machine learning models help oncologists and other specialists predict how individual patients will respond to specific therapies, enabling more personalized treatment planning.
- Genetic Screening and Risk Assessment: AI tools analyze genetic data to identify patients at high risk for hereditary cancers and other conditions, allowing for preventive interventions.
- Patient Monitoring and Care Coordination: Continuous AI-powered monitoring systems track patient vital signs and alert clinicians to potential complications before they become critical.
- Research Acceleration: AI tools help researchers keep pace with the exponential growth of the medical literature, identifying relevant studies and extracting key findings automatically.
- Administrative Efficiency: Generative AI systems assist with writing, editing, and administrative tasks, freeing clinicians to focus on patient care.

The breadth of these applications explains why MSK's AI Governance Committee felt compelled to develop a comprehensive policy. Without clear guidelines, individual departments might adopt AI tools without considering downstream risks or ethical implications.

How to Implement Responsible AI Governance in Your Healthcare Organization

For hospital leaders and IT teams considering how to approach AI governance, MSK's framework offers several practical lessons. The policy development process itself reveals key steps that other institutions should consider when building their own AI oversight structures.

- Establish a Governance Committee: Create a dedicated committee with representation from clinical, technical, ethical, and administrative perspectives to oversee AI development and deployment across the organization.
- Define Clear Policies Before Deployment: Develop written policies that specify how AI systems should be developed, tested, validated, and monitored before they are used in clinical settings.
- Create Accessible Resources for Staff: Build comprehensive guides, glossaries, and educational materials to help clinicians and researchers understand AI terminology and best practices specific to healthcare applications.
- Maintain Transparency About Tool Use: Establish clear documentation of which AI tools are approved for use, what they are designed to do, and what their limitations are.
- Provide Ongoing Education and Support: Offer training programs, podcasts, and expert forums where staff can learn about AI developments and discuss implementation challenges.

MSK's approach demonstrates that governance is not a one-time policy document but an ongoing institutional commitment. The center has created multiple resources to support this effort, including educational materials, curated news coverage, and expert forums.

Why Are Healthcare Institutions Suddenly Focused on AI Governance?

The urgency around AI governance in healthcare stems from several converging factors. First, the capabilities of AI systems have advanced rapidly, making them increasingly useful for clinical decision-making. Second, the stakes are high: errors in AI-assisted diagnosis or treatment planning can directly harm patients. Third, regulatory bodies and accreditation organizations are beginning to expect healthcare institutions to have formal AI oversight structures in place.

MSK's policy also reflects growing awareness that AI governance is not primarily a technical problem but an institutional and ethical one. The policy addresses questions that go beyond algorithm performance: How should hospitals ensure that AI systems are fair and unbiased across different patient populations? How should clinicians be trained to use AI tools appropriately? What happens when an AI system makes a recommendation that conflicts with a clinician's judgment? These questions require input from ethicists, clinicians, administrators, and technical experts working together.
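One concrete piece of the governance machinery described above is a documented registry of approved AI tools, each with a stated purpose and known limitations. The following is a minimal sketch of what such a registry could look like in code; it is an illustration under stated assumptions, not MSK's actual system, and all names (`AITool`, `ToolRegistry`, `chest-xray-triage`) are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class AITool:
    """One entry in a hypothetical institutional AI tool registry."""
    name: str
    purpose: str
    approved: bool
    limitations: list[str] = field(default_factory=list)


class ToolRegistry:
    """Tracks which AI tools are approved for use and what their limits are."""

    def __init__(self) -> None:
        self._tools: dict[str, AITool] = {}

    def register(self, tool: AITool) -> None:
        # Store (or update) the tool's record by name.
        self._tools[tool.name] = tool

    def is_approved(self, name: str) -> bool:
        # Unknown tools are treated as unapproved by default.
        tool = self._tools.get(name)
        return tool is not None and tool.approved

    def limitations(self, name: str) -> list[str]:
        # Surface documented limitations so clinicians can see them.
        tool = self._tools.get(name)
        return tool.limitations if tool else []


# Example: register a hypothetical imaging triage model.
registry = ToolRegistry()
registry.register(AITool(
    name="chest-xray-triage",
    purpose="Flag suspected pneumothorax for radiologist review",
    approved=True,
    limitations=[
        "Not validated for pediatric patients",
        "Assistive only; does not replace a radiologist read",
    ],
))

print(registry.is_approved("chest-xray-triage"))   # True
print(registry.is_approved("unreviewed-chatbot"))  # False
```

The design choice worth noting is that unregistered tools default to "not approved," mirroring the policy stance that governance should precede deployment rather than follow it.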
The resources MSK has assembled to support its policy implementation offer a window into what comprehensive AI governance looks like in practice. The center has curated news coverage from leading sources including MIT Technology Review, JAMA Network, and specialized medical AI publications. It has also created glossaries of AI terminology tailored to specific domains such as radiology and generative AI. Additionally, MSK hosts expert forums and podcasts where clinicians and researchers can discuss the implications of AI in medicine.

For healthcare organizations still in the early stages of AI adoption, MSK's framework suggests that governance should not be an afterthought. Instead, institutions should establish clear policies and oversight structures before deploying AI systems widely. This approach reduces the risk of unintended consequences and helps ensure that AI tools enhance rather than undermine clinical practice. As AI continues to transform healthcare, institutions that invest in thoughtful governance now will be better positioned to realize the benefits while managing the risks.