The gap between what people actually use for mental health support and what regulators can control is widening fast. More than one in three adults in the UK (37 percent) have used an AI chatbot to support their mental health or wellbeing, according to new guidance from the NHS Confederation. But here's the catch: 66 percent of those people are using general-purpose chatbots such as ChatGPT, Claude, or Meta AI rather than platforms specifically designed to provide mental health support. Combining those two figures suggests that roughly one in four UK adults overall (about 24 percent) has turned to a general-purpose chatbot for this purpose. This disconnect between consumer behavior and clinical oversight is forcing policymakers to rethink how AI regulation actually works in healthcare.

Why Are People Using Unregulated AI Tools for Mental Health?

The answer is simple: availability and convenience. While the NHS works to develop and validate clinical-grade AI tools, millions of people are already turning to consumer chatbots for immediate support. These general-purpose tools aren't designed for mental health, aren't subject to medical device regulations, and often lack evidence of clinical effectiveness. Yet people use them anyway: sometimes because they're free, sometimes because they're available 24/7, and sometimes because they don't know the difference between a regulated clinical tool and a consumer product.

This creates a genuine safety concern. The NHS Confederation's new guidance explicitly acknowledges that "there are concerns about the safety and effectiveness of AI in personal use, which has rightly increased scrutiny on how AI is being used in the health and care system." The organization draws an important distinction between three categories of AI tools: those built for mental health care in the NHS, which are subject to standards and regulation; consumer wellbeing products, which often have less regulatory oversight; and AI tools not intended for mental health care at all but used by people with mental health challenges.

What Standards Will NHS Clinical AI Actually Meet?

The NHS is taking a fundamentally different approach to AI regulation than the one that governs the consumer market. Clinical AI, as defined by the NHS Confederation, is artificial intelligence specifically designed to support core clinical functions such as diagnostics, therapeutic delivery, risk monitoring, and treatment planning. These tools must meet four rigorous criteria that consumer chatbots typically don't:

- Evidence-based validation: Supported by peer-reviewed studies with substantial sample sizes, conducted in real clinical settings with actual patients rather than in wellness applications or uncontrolled environments
- Patient-focused design: Built for direct use by patients receiving medical care within healthcare systems, not as general consumer products
- Clinical-grade standards: For class IIa and above medical devices, adherence to stringent regulations and quality management standards, including ISO 13485, ISO 14971, IEC 62304, ISO 27001, and GDPR compliance
- Clinical workflow integration: Designed to enhance healthcare delivery and support clinical decision-making, not just to streamline administrative processes like notetaking or scheduling

This regulatory framework is being developed in response to the UK government's 10 Year Health Plan, which sets out an ambitious timeline for AI integration in the NHS. The Medicines and Healthcare products Regulatory Agency (MHRA) will publish a new regulatory framework for AI in healthcare in 2026, informed by the work of the National Commission into the Regulation of AI in Healthcare.
How to Distinguish Between Clinical AI and Consumer Wellness Tools

- Regulatory oversight: Clinical AI tools used in the NHS are subject to medical device regulations and quality management standards; consumer chatbots typically operate with minimal regulatory oversight
- Clinical evidence: NHS-approved tools must have peer-reviewed validation from real clinical settings; consumer tools often lack published evidence of effectiveness for mental health conditions
- Design purpose: Clinical AI is built specifically for diagnosed conditions and therapeutic delivery; consumer tools are designed for general wellbeing and may not be appropriate for people with serious mental health conditions
- Safety monitoring: Clinical tools include risk monitoring and treatment planning features; consumer chatbots typically lack these safeguards

The distinction matters because mental health conditions are complex. They require multidisciplinary care and sensitive clinical conversations that demand specialized AI design. A general-purpose chatbot trained on internet text may produce supportive responses, but it cannot diagnose conditions, monitor risk, or adjust treatment plans the way clinical-grade AI can.

One NHS mental health trust chief information officer captured the potential of properly designed clinical AI: "The future opportunity presented by clinical AI is that someone with a mental health condition can stay supported 24/7, rather than just when they have an appointment." This vision requires the kind of rigorous validation and integration that consumer tools simply don't provide.

What's the Timeline for NHS AI Deployment?

The NHS has set specific milestones for AI integration across mental health and broader healthcare services. From 2025 to 2028, investment in AI infrastructure will include the development and implementation of an NHS AI strategic roadmap, clearer ethical and governance frameworks for AI, and the rollout of new AI upskilling programs for the workforce. By 2027, the NHS plans to roll out validated AI diagnostic tools and deploy AI administrative tools, including AI scribes, NHS-wide. By 2035, AI is expected to be seamlessly integrated into most clinical pathways, with generative AI tools widely adopted and the NHS positioned as a global leader in deploying AI ethically.

This timeline reflects the complexity of the task. Developing clinical-grade AI isn't just about building technology; it requires building evidence, training staff, integrating systems, and establishing governance frameworks that protect patient safety while enabling innovation. The 2026 MHRA framework will be a critical milestone because it will clarify the standards clinical AI must meet to be approved for use in the NHS.

The broader challenge is that while the NHS builds its regulated clinical tools, millions of people are already using unregulated consumer AI for mental health support. This creates a two-tier system in which those who can access NHS services get clinically validated tools while others rely on general-purpose chatbots with unknown safety profiles. Closing this gap will require not just better regulation but also better public understanding of the difference between clinical AI and consumer wellness tools.
The NHS Confederation's new guidance is a step toward that clarity. But the real test will come when clinical-grade tools are finally deployed and people have to choose between the familiar consumer tools they already use and the regulated alternatives the NHS is building.