PSYREFLECT
INDUSTRY · March 19, 2026 · 2 min read

The Regulatory Reckoning: FDA and States Move on AI Therapy Chatbots

Key Findings
  • FDA's Digital Health Advisory Committee (DHAC) convened in November 2025 to assess GenAI therapy chatbots; to date the FDA has authorized no GenAI mental health device, despite clearing 1,200+ AI medical devices overall
  • DHAC identified two novel risks specific to therapy chatbots: "sycophancy" (AI agreeing with the patient to please them) and "metacognition failure" (AI unable to recognize when a patient's input falls outside known categories)
  • Illinois became the first state to bar AI from making independent therapeutic decisions (August 2025); California followed with companion-chatbot requirements (January 2026), and Ohio and Florida have bills pending
  • FDA proposed requiring predetermined change control plans (PCCP) and double-blind RCTs for premarket approval — applying pharmaceutical-grade evidence standards to software

Something shifted in late 2025. The AI therapy chatbot — once a curiosity operating in regulatory gray space — hit a regulatory inflection point. The FDA convened its advisory committee. States passed laws. The question is no longer whether these tools will be regulated, but how.

The federal picture

On November 6, 2025, the FDA's Digital Health Advisory Committee held its second meeting to tackle generative AI in mental health devices. The committee examined a hypothetical LLM-based therapy chatbot for major depressive disorder — prescription-only, for adults — and systematically catalogued what could go wrong.

Two risks stood out as genuinely new. Sycophancy: the model tells patients what they want to hear, not what they need to hear — a fundamental inversion of therapeutic challenge. And metacognition failure: the model cannot recognize when it is out of its depth, when the patient's presentation does not fit any pattern it has learned. A human therapist knows when they are confused. An LLM does not.

The committee's recommendations lean toward pharmaceutical-grade regulation: double-blind RCTs (complex when the conversational style is itself the intervention), human escalation pathways, continuous drift monitoring, and predetermined change control plans. The message is clear — treating depression with software should require the same evidentiary bar as treating it with a pill.
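The committee's "human escalation pathway" is easier to picture in code. Below is a minimal sketch, assuming a hypothetical chatbot that reports a category label and a confidence score for each patient turn; every name in it (Turn, needs_human, CONFIDENCE_FLOOR) is illustrative, not drawn from the DHAC materials or any real product. It shows the two triggers discussed above: crisis language as a hard rule, and low classification confidence as a proxy for metacognition failure.

    # Illustrative sketch only: one way a "human escalation pathway" might be
    # wired around a therapy chatbot. All names here are hypothetical.

    from dataclasses import dataclass

    CONFIDENCE_FLOOR = 0.75          # below this, the model is "out of its depth"
    CRISIS_TERMS = ("suicide", "self-harm", "overdose")

    @dataclass
    class Turn:
        text: str
        label: str        # model's best-guess category for the input
        confidence: float # model's self-reported probability for that label

    def needs_human(turn: Turn) -> bool:
        """Route to a clinician when the model flags crisis content or cannot
        place the input in a known category with enough confidence."""
        if any(term in turn.text.lower() for term in CRISIS_TERMS):
            return True                        # hard rule: crisis language
        return turn.confidence < CONFIDENCE_FLOOR  # soft rule: metacognition gap

    # Usage: a turn the model cannot classify confidently gets escalated.
    turn = Turn(text="I don't know how to describe what I'm feeling",
                label="unspecified-distress", confidence=0.41)
    print("escalate" if needs_human(turn) else "respond")  # -> escalate

A production system would need clinically validated triggers and audited thresholds; the point is only that escalation logic is a design decision, not an afterthought.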

The state-level wave

While the FDA deliberates, states are acting. Illinois was first (August 2025), prohibiting AI from making independent therapeutic decisions or directly interacting with clients therapeutically. California followed (January 2026) with requirements for companion chatbots. Ohio and Florida have bills in the pipeline.

The common thread: AI cannot practice therapy. It can assist, document, transcribe — but the therapeutic relationship remains a human domain, at least for now.

For your practice

If you use or are considering any AI-powered clinical tool: the regulatory landscape just changed. The era of unregulated AI therapy tools is ending. Watch for your state's legislation — the patchwork is growing fast. For those building or recommending digital tools: the DHAC framework signals what evidence regulators will eventually require. Start thinking about clinical evidence, safety monitoring, and human oversight now, not when the rule is finalized.

The FDA's core insight: an AI that cannot recognize its own confusion is fundamentally unsafe for therapy.

Limitations

The DHAC meeting was advisory, not binding, and the FDA's rulemaking timeline remains unclear. State laws vary significantly in scope and enforceability. The hypothetical chatbot under discussion covered MDD only; regulation of other use cases remains unaddressed.

Source
STAT News
FDA digital advisers confront risks of therapy chatbots, weigh possible regulation
2025-11-05
Tags
AI regulation · FDA · therapy chatbots · mental health policy · digital health