Industry · January 8, 2026 · 2 min read

37% of UK Adults Already Use AI for Mental Health — NHS Report Maps the Reality

Key Findings
  • 37% of UK adults now use AI chatbots for mental health support, with 66% using general-purpose tools (ChatGPT, Claude) rather than purpose-built mental health platforms
  • NHS Confederation briefing outlines governance frameworks for safe clinical AI adoption in mental health services
  • MHRA to publish new AI-in-healthcare regulatory framework in 2026, setting standards for patient-facing AI tools
  • Report distinguishes consumer AI use from clinical AI deployment and addresses liability, safety, and practical integration

The question is no longer whether AI will enter mental health practice. A third of UK adults are already using it — and the majority are choosing general-purpose chatbots, not purpose-built mental health tools. The NHS Confederation's new briefing maps this reality and begins to outline what responsible adoption looks like within clinical services.

The consumer reality

The most striking number: 66% of those using AI for mental health are using ChatGPT, Claude, or similar general-purpose tools. Not Woebot. Not Wysa. Not any of the purpose-built, clinically validated platforms. Patients are self-prescribing AI therapy from tools designed for writing emails and code.

This creates a governance gap that no amount of clinical AI regulation can close. The MHRA's forthcoming framework will set standards for purpose-built mental health AI — but it cannot regulate how a patient uses ChatGPT at 2 AM during a crisis. The briefing acknowledges this tension without pretending to resolve it.

What clinicians should know

The report distinguishes three tiers of AI in mental health: consumer self-use (unregulated, already widespread), clinical decision support (augmenting practitioner judgement), and patient-facing clinical tools (requiring MHRA certification). Each tier has different risk profiles, liability structures, and governance needs.

For NHS trusts already piloting AI tools, the briefing provides a practical framework: clinical safety assessments, data governance requirements, workforce training needs, and patient consent models. For private practitioners, the principles transfer even if the institutional structure does not.

The MHRA framework expected in 2026 will be the first UK-specific regulatory standard for AI in healthcare. Until it arrives, clinicians operate in a grey zone — and this report is the most practical guide to navigating it.

Two-thirds of the UK adults who use AI for mental health choose ChatGPT over purpose-built tools, creating a governance gap that no regulation can fully close.

Limitations

The data are UK-specific; consumer AI usage patterns may differ in other healthcare systems. The MHRA framework is forthcoming, not yet published, so details may change. The report is a policy briefing, not a systematic review.

Source
NHS Confederation
Demystifying Clinical AI in Mental Health
2026-03-01
Tags
AI · mental-health-policy · NHS · regulation · chatbots