Why Britons Don't Trust AI Doctors Yet, Even as the UK Invests £1.6 Billion

The UK is betting big on artificial intelligence to transform healthcare, yet a significant trust gap threatens to undermine these ambitious plans. While the government commits £1.6 billion to AI research over the next four years, new polling from the Health Foundation shows that public confidence in AI-generated medical advice remains fragile. Only 49% of the public say they would use a proposed AI-powered "Doctor in Your Pocket" feature for non-urgent care, and nearly one-third (32%) explicitly said they would not use it.

This disconnect between government investment and public hesitation reveals a fundamental challenge facing healthcare leaders: building trust in AI systems before rolling them out at scale. The stakes are high, as the NHS prepares to expand its digital app with AI capabilities as part of the government's 10-Year Health Plan.

What's Driving Public Caution About AI in Healthcare?

The Health Foundation's third annual health tech tracker surveyed 8,000 members of the public and more than 2,000 NHS staff to understand attitudes toward expanded NHS App features. The results paint a clear picture: people are comfortable with traditional digital health functions but hesitant when algorithms enter the picture.

When asked about basic NHS App functions, support is strong. Around three-quarters of the public would happily use the app to book hospital appointments (76%), choose a preferred hospital (73%), and access information about procedures (73%). But the moment AI-generated advice enters the equation, enthusiasm drops sharply. The gap between traditional digital services and AI-powered services suggests that the public views algorithmic decision-making in healthcare differently from simple information access.

"As policymakers seek to develop the UK's approach to overseeing and regulating AI in health care, it will be important to create an environment where the use of AI is trusted by patients and the public," said Ahmed Binesmael, senior improvement analyst at the Health Foundation.


Binesmael added that the public currently prioritizes due diligence and safeguards over potential benefits such as speed or availability, suggesting that regulators face pressure to prove safety before promoting convenience.

How Can Healthcare Leaders Build Trust in AI Systems?

  • Establish Clear Regulatory Frameworks: Multiple healthcare organizations, including the Royal College of Radiologists, the Institute of Physics and Engineering in Medicine, and the Society of Radiographers, have called for consistent regulation across AI developers, healthcare providers, and professionals to ensure safety-critical standards are met.
  • Invest in Workforce Training and Accountability: Experts emphasize the need for a properly trained and funded workforce with the capability and authority to assure AI systems in clinical practice, ensuring that humans remain responsible for algorithmic decisions.
  • Create Legal Protections for Patients: The Association of Personal Injury Lawyers has warned that current law lags behind AI healthcare advances, leaving injured patients with complex and costly product liability claims when faulty AI is involved in their treatment.
  • Ensure Transparent Communication: Healthcare leaders must openly discuss how AI will be used, what safeguards are in place, and what recourse patients have if something goes wrong.

Matthew Taylor, interim chief executive of the NHS Confederation and NHS Providers, acknowledged the potential of AI and digital tools to improve NHS productivity and give people control over their health information. However, he emphasized that there remains a critical need to "build trust" in their use.

Are Current Laws Protecting Patients Injured by Faulty AI?

One of the most pressing concerns is the legal gap. As AI becomes more embedded in clinical decision-making, patients injured by faulty algorithms face an uncertain path to justice. The current system forces them to pursue product liability claims against well-resourced manufacturers, sometimes based overseas, in proceedings that are notoriously complex, costly, and lengthy.

"The law is lagging behind when people are injured and AI technology is involved. AI use in healthcare is set to be transformative in providing rapid, accurate diagnosis and personalised treatment, so it's key that if patients are hurt due to negligence where AI plays a part that they have a clear and accessible route to redress through the courts in the UK," said Pauline Roberts, vice president of the Association of Personal Injury Lawyers.


The Association of Personal Injury Lawyers has submitted formal responses to the Medicines and Healthcare products Regulatory Agency (MHRA) calling for a new regulatory framework that addresses these gaps. The MHRA is developing recommendations through the National Commission, but no timeline has been announced for when those recommendations will be published.

What Does the UK's £1.6 Billion AI Investment Actually Cover?

In mid-February 2026, the UK government released its AI strategy for UK Research and Innovation (UKRI), committing £1.6 billion in direct funding to the AI sector over the next four years. This represents UKRI's largest single investment area for 2026 to 2030.

The strategy includes specific allocations for healthcare applications. Up to £137 million will be delivered as part of the Department for Science, Innovation and Technology's AI for Science Strategy, backing AI-enabled scientific discovery with a focus on drug discovery and new treatments. The investment also supports expanded doctoral and fellowship routes co-designed with businesses, as well as recognized career frameworks for research software engineers, data scientists, and ethics specialists.

"From spotting cancers earlier to cutting backlogs in public services, new research into AI will be a game-changer, bringing the promise of tomorrow's technologies to the UK today," said deputy prime minister David Lammy.


Despite this optimism from government officials, the polling data suggests that translating research investment into public adoption will require more than funding alone. Trust, transparency, and legal clarity must accompany technological advancement.

What Happens Next for AI Healthcare Regulation?

The regulatory landscape remains in flux. The MHRA's consultation on AI healthcare oversight has closed, but recommendations have not yet been published. When they do arrive, they will need to balance multiple competing priorities: enabling innovation, ensuring safety, protecting patients, and maintaining public confidence.

Mark Knight, president of the Institute of Physics and Engineering in Medicine, emphasized that AI must be regulated as a safety-critical technology, requiring clear standards across the entire AI lifecycle. This level of oversight will demand resources, expertise, and coordination across multiple organizations.

For now, the gap between government ambition and public trust remains wide. Closing it will require not just investment in technology, but investment in the regulatory infrastructure, legal frameworks, and transparent communication that can convince Britons that AI doctors are worth trusting with their health.