Healthcare systems are deploying AI tools at breakneck speed, but most lack clear ethical guidelines to prevent harm. More than 1,200 artificial intelligence (AI)-enabled medical devices have received FDA approval in the United States, yet only 8% have a plan in place to monitor how the product performs after approval. This regulatory gap has already led to serious consequences: AI algorithms have incorrectly deprioritized care for Black patients due to racial bias, and elderly patients have been denied insurance coverage for procedures they would otherwise have received.

The problem stems from what experts call a "pick and mix approach" to AI adoption in healthcare. Rather than implementing comprehensive guidelines, hospitals and health systems have been making up rules on the fly as they deploy new technologies. Annika Schoene, an assistant professor of public health and health sciences at Northeastern University whose work focuses on AI safety, warns that this ad hoc strategy is unsustainable. "We use very little regulation when it comes to AI," Schoene said. "I'm telling you right now, if we don't get to grips with it, good luck."

What's Driving the Lack of AI Oversight in Healthcare?

Healthcare organizations have embraced AI for compelling reasons. The technology helps clinicians save time by detecting patterns in X-rays and electronic medical records, and it supports initial mental health screenings. However, the speed of adoption has far outpaced the development of safety frameworks. Many hospitals implemented AI tools to reduce workloads for staff already stretched thin, without pausing to ask critical questions about how the technology might fail or harm patients.

The challenge is particularly acute because AI literacy remains low among healthcare workers. Clinicians and hospital administrators often lack the technical knowledge to evaluate whether an AI tool is safe, fair, or appropriate for their patient population. Meanwhile, the computer scientists and engineers building these systems may not fully understand the clinical context or ethical implications of their work.

How Can Healthcare Systems Build Ethical AI Practices?

Schoene and her research team at Northeastern University are developing a universal guide for ethical AI use in healthcare. The project brings together computer scientists, public health researchers, ethicists, and healthcare workers to answer a fundamental question: How do you teach technical experts about ethics, and doctors, who are already ethically trained, about technology?

The guide will serve as a reference document at every stage of AI adoption, from initial purchasing decisions to ongoing monitoring after deployment. Here's how healthcare systems can use this framework:

- Pre-Purchase Evaluation: Before buying an AI-integrated tool like a breast cancer detection system, hospital administrators would consult the guide to verify that the software meets a set of ethical standards still being developed.
- Implementation and Monitoring: Once installed, hospital IT workers could use the same guide to understand what they need to monitor going forward, such as privacy settings around patient data and whether the tool performs equally well across different demographic groups (see the sketch after this list).
- Ongoing Accountability: The guide will help healthcare workers ask critical questions upfront about any AI tool, whether they're implementing it or using it, creating a culture of informed skepticism rather than blind adoption.
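To make the monitoring item above concrete, here is a minimal, hypothetical sketch in Python of what checking whether a tool "performs equally well across different demographic groups" could look like in practice. The record format, the choice of sensitivity as the metric, and the disparity threshold are illustrative assumptions for this article; they are not part of the Northeastern guide, which is still being developed.

```python
# Hypothetical sketch: auditing an AI tool's performance across demographic groups.
# The data fields, metric, and threshold are illustrative assumptions only.
from collections import defaultdict

def sensitivity_by_group(records):
    """Compute per-group sensitivity (true-positive rate) from audit records.

    Each record is a dict like {"group": "A", "label": 1, "prediction": 1},
    where label/prediction are 1 for "condition present" and 0 otherwise.
    """
    positives = defaultdict(int)       # actual positives seen per group
    true_positives = defaultdict(int)  # correctly flagged positives per group
    for r in records:
        if r["label"] == 1:
            positives[r["group"]] += 1
            if r["prediction"] == 1:
                true_positives[r["group"]] += 1
    return {g: true_positives[g] / positives[g] for g in positives if positives[g]}

def flag_disparity(rates, max_gap=0.05):
    """Flag the audit for review if any two groups' sensitivities differ by more than max_gap."""
    if not rates:
        return False
    return (max(rates.values()) - min(rates.values())) > max_gap

# Example usage with toy audit data
records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 1},
]
rates = sensitivity_by_group(records)
print(rates, "review needed:", flag_disparity(rates))
```

A periodic check of this kind is one way an IT team could operationalize the guide's monitoring requirements, alongside reviews of privacy settings around patient data.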
Cansu Canca, director of responsible AI practice at Northeastern's Institute for Experiential AI, explained the core challenge: "In AI ethics, we refer to various values and ethical goals but often they remain aspirational rather than operational. What we aim to do is to provide a framework where these values can be turned into design and development decisions as well as monitoring requirements, in the technical language that developers understand, while being grounded in the domain knowledge that is provided by health care experts."

Why Does This Matter for Patient Safety?

The stakes are high. Without clear ethical guidelines, the more than 1,200 AI-enabled medical devices currently approved by the FDA present potential security and health risks. Real-world examples illustrate the danger: AI systems have exhibited racial bias in patient prioritization, and algorithmic decision-making has resulted in insurance denials for elderly patients. These aren't theoretical risks; they're happening now in hospitals across the country.

The research team, which includes Robert Leeman, chair of Northeastern's public health and health sciences program, Northeastern associate clinical professor Michael Bessette, and Agata Lapedriza, a principal research scientist with the Institute for Experiential AI, is working directly with large healthcare systems to understand what workers actually need to learn about AI. This ground-level approach ensures the guide will be practical and actionable, not just academically sound.

Schoene acknowledged that the guide itself will be a "living document" that evolves as AI technology advances. The field is moving so quickly that even computer scientists struggle to keep pace. Rather than creating a static rulebook, the team plans to continuously update and adapt the framework based on new developments and real-world feedback from healthcare systems.

The ultimate goal is to equip healthcare workers with the knowledge and confidence to push back on inappropriate AI implementations, or to speak knowledgeably with clinicians about what a technology should and shouldn't be doing. As Schoene put it: "Hopefully this blueprint will, in some way, shape or form, equip some technical person in the health care system to either push back or know at the end of the day how to speak to a clinician as to what the technology should be or shouldn't be doing."

For a healthcare industry that has adopted AI tools with minimal guardrails, this universal ethics guide represents a critical step toward ensuring that artificial intelligence improves patient care rather than introducing new forms of harm.