Why Doctors Are Asking Patients to Help Design AI Tools That Affect Their Care
Generative AI (GenAI) is being built into tools that help patients understand their health and make treatment decisions, but a major new analysis reveals a critical gap: patients and caregivers are rarely involved in designing these systems from the beginning. Researchers at NORC at the University of Chicago reviewed how GenAI can support patient-centered clinical decision support (PC CDS), a framework that puts patients' values, preferences, and health information at the center of care decisions. The findings highlight six essential needs to make these AI tools trustworthy and effective.
What Exactly Is Patient-Centered Clinical Decision Support?
Patient-centered clinical decision support sounds technical, but it's really about giving patients and caregivers the information and tools they need to make informed health choices alongside their doctors. Unlike traditional decision-support systems built primarily for clinicians, PC CDS tools are designed with patients as active users. These systems combine evidence-based medical knowledge with data that patients generate themselves, such as health tracking information or personal health preferences. The goal is to facilitate shared decision-making, where patients, caregivers, and care teams discuss health information together and reach decisions that align with what patients actually value.
GenAI is opening new possibilities for these tools. Large language models (LLMs), which are AI systems trained on vast amounts of text to understand and generate human language, can help patients understand complex medical information, manage chronic conditions, and communicate more effectively with their doctors. Some tools use conversational agents, essentially AI chatbots, to answer patient questions in plain language. Others monitor symptoms in real time or help personalize treatment options based on individual patient circumstances.
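To make the conversational-agent idea concrete, here is a minimal sketch of how a patient-facing chatbot might wrap an underlying model with plain-language instructions and an explicit AI disclosure. It is illustrative only: `call_llm` is a hypothetical stand-in for whatever model API a real tool would use, and the prompt wording is an assumption, not something prescribed by the research.

```python
# Minimal sketch of a patient-facing conversational agent.
# `call_llm` is a hypothetical stand-in for a real model API call.

SYSTEM_PROMPT = (
    "You are a patient education assistant. Explain medical information "
    "in plain, eighth-grade-level language. Do not diagnose or prescribe; "
    "direct urgent concerns to the patient's care team."
)

def call_llm(system_prompt: str, user_message: str) -> str:
    """Hypothetical model call; a real tool would invoke an LLM API here."""
    raise NotImplementedError("Connect this to your model provider.")

def answer_patient_question(question: str) -> str:
    """Answer a patient question with an explicit AI disclosure attached."""
    reply = call_llm(SYSTEM_PROMPT, question)
    return "[AI-generated response, not medical advice]\n" + reply
```

Keeping the disclosure in the wrapper, rather than trusting the model to volunteer it, ensures patients always see it.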
Why Are Patients Being Left Out of the Design Process?
The research team, led by experts including Dr. Prashila Dullabh at NORC, conducted a comprehensive review of how GenAI is currently being applied to patient-centered care. They examined reports from the Agency for Healthcare Research and Quality's Clinical Decision Support Innovation Collaborative, reviewed 53 peer-reviewed sources, and gathered feedback from a 20-member steering committee that included patients, clinicians, researchers, and policymakers. What emerged was a troubling pattern: while GenAI tools for patients are proliferating, the people who will actually use them are often not meaningfully involved in building them.
This matters because patient-centered tools raise distinct expectations around health literacy, self-management, and the ability to engage in shared decision-making with doctors. If patients don't understand how an AI tool works, or if the tool doesn't reflect their actual needs and values, it won't help them make better decisions. In fact, it could harm them.
Six Critical Needs to Make GenAI Patient Tools Trustworthy
The research team identified six foundational requirements that must be addressed for GenAI-supported patient decision tools to work effectively and safely:
- Patient and Caregiver Representation: Patients and caregivers must be engaged and represented in the design and development of these tools from the earliest stages, not consulted after the fact.
- Implementation Science: Researchers need to build a stronger evidence base for how to implement these tools effectively in real-world settings and how to support genuine patient engagement with them.
- Risk-Based Policies: Healthcare organizations need clear, risk-based policies that specify when it's appropriate to use GenAI in patient decision support and when it isn't.
- Independent Testing and Vetting: Third-party organizations should establish and apply consistent criteria for testing and validating GenAI-supported patient tools before they're deployed in clinical settings.
- Ongoing Performance Monitoring: Tools must be periodically reassessed to identify algorithmic drift, a phenomenon where AI systems gradually perform worse over time as real-world data changes, and to verify that performance remains safe and effective (a minimal monitoring sketch follows this list).
- Transparency and Consent Policies: Healthcare systems need policies that ensure patients understand when they're interacting with GenAI and can provide informed consent for its use in their care.
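Of these six, ongoing performance monitoring is the most directly testable in code. The sketch below is a minimal illustration, assuming the tool logs whether a clinician reviewer judged each AI response accurate; a rolling accuracy rate is then compared against the accuracy measured at initial validation. The baseline, window size, and tolerance are illustrative assumptions, not thresholds from the NORC analysis.

```python
from collections import deque

# Minimal drift-monitoring sketch: track a rolling accuracy rate and
# flag when it falls meaningfully below the validated baseline.
# Baseline, window size, and tolerance are illustrative assumptions.

BASELINE_ACCURACY = 0.92   # accuracy measured at initial validation
TOLERANCE = 0.05           # allowed drop before raising an alert
WINDOW = 200               # number of recent clinician reviews to consider

recent_reviews = deque(maxlen=WINDOW)

def record_review(was_accurate: bool) -> None:
    """Log one clinician judgment of whether an AI response was accurate."""
    recent_reviews.append(was_accurate)

def drift_detected() -> bool:
    """Return True once rolling accuracy drops below the alert threshold."""
    if len(recent_reviews) < WINDOW:
        return False  # not enough data yet to judge drift
    rolling_accuracy = sum(recent_reviews) / len(recent_reviews)
    return rolling_accuracy < BASELINE_ACCURACY - TOLERANCE
```

A real deployment would also watch calibration, subgroup performance, and patient-reported usefulness, but even a simple rolling check like this catches the gradual degradation the researchers warn about.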
How to Build Better AI Tools for Patient Care
Moving forward, developers and healthcare organizations can take concrete steps to ensure GenAI patient tools are designed responsibly and effectively:
- Co-Design With Patients: Include patients and caregivers as equal partners in design teams from the beginning, not as afterthought reviewers. Their lived experience is essential data.
- Test for Real-World Usability: Pilot tools with actual patients in diverse settings, including rural and underserved communities, to ensure they work for everyone, not just tech-savvy users.
- Establish Clear Governance: Create institutional policies that define when GenAI is appropriate for patient-facing applications, who oversees validation, and how often tools are audited for safety and accuracy.
- Build Transparency Into the Tool: Design interfaces that clearly explain to patients when they're using AI, how the AI works in plain language, and what its limitations are (see the sketch after this list).
- Plan for Continuous Improvement: Set up systems to monitor how patients actually use these tools, collect feedback regularly, and update the AI when performance drifts or new evidence emerges.
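As one small illustration of the transparency recommendation above, a tool can attach a standard disclosure to every AI-generated message rather than relying on the model to identify itself. The structure and wording below are hypothetical, not drawn from the research.

```python
from dataclasses import dataclass

# Hypothetical sketch: pair every AI-generated message with a fixed
# disclosure so the interface always shows patients what they are
# reading and what its limits are.

@dataclass
class DisclosedMessage:
    text: str  # the AI-generated content itself
    source: str = "AI assistant (generative model)"
    limitation: str = ("May contain errors and is not a substitute "
                       "for advice from your care team.")

def disclose(ai_text: str) -> DisclosedMessage:
    """Wrap raw model output in a patient-facing disclosure."""
    return DisclosedMessage(text=ai_text)
```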
What Does This Mean for Patients and Doctors?
The implications are significant for both patients and clinicians. When GenAI patient tools are designed well, they can help patients understand their health conditions better, manage chronic diseases more effectively, and participate more actively in decisions about their care. For doctors, these tools can reduce the burden of explaining complex medical information and help ensure that patients' values and preferences are genuinely considered in treatment planning.
However, poorly designed tools can do the opposite. If a patient doesn't understand how an AI system arrived at a recommendation, or if the tool doesn't account for their specific circumstances or cultural values, it could undermine trust in both the AI and the healthcare provider. That's why the research team emphasizes that patient involvement in design isn't just ethically important; it's practically essential for these tools to work.
The healthcare industry is at an inflection point. GenAI has genuine potential to improve patient engagement and health outcomes, but only if the technology is developed with the same care and rigor that we expect from any medical intervention. The research from NORC and its steering committee provides a roadmap for getting this right, starting with a simple but powerful principle: patients should help design the tools that affect their care.
" }