When Patients Ask Their Therapist About AI Chatbots

A Brown University study found AI chatbots systematically violate mental health ethics. Yet most therapists have never used one. We need to close this gap — fast.

Something uncomfortable is happening in therapy rooms across the world: patients are arriving already using AI chatbots for emotional support — and their therapists don't know what to say about it.

The Scale of the Problem

A 2025 Brown University study systematically evaluated major AI chatbots against established mental health ethics standards. The results were striking: every chatbot tested violated core principles — from failing to recognize crisis situations to providing reassurance where clinical referral was needed.

Meanwhile, the WHO convened experts in March 2026 specifically to address responsible AI for mental health. This isn't a theoretical concern anymore. It's a clinical reality.

The Knowledge Gap

Here's the core tension: 71% of psychologists have never personally used an AI tool. Yet their patients increasingly have. This creates an asymmetry that undermines the therapeutic relationship.

When a patient says "I've been talking to an AI about my anxiety," a therapist needs to understand:

  • What the patient likely experienced (and what they think they experienced)
  • How AI chatbot responses differ from therapeutic intervention
  • Where the genuine risks lie (dependency, normalization of non-evidence-based advice, crisis response gaps)
  • How to have this conversation without being dismissive or alarmist

None of this is covered in standard clinical training.

What Alesvia Mind Is Building

We're developing a practical toolkit for mental health professionals:

Clinical briefings — Concise, evidence-based summaries of what current AI chatbots can and cannot do, updated quarterly as the technology evolves.

Conversation frameworks — Structured approaches for discussing AI use with patients, grounded in motivational interviewing principles rather than prohibition.

Peer learning networks — Connecting professionals who are navigating these questions so they can learn from each other's clinical experience.

Ethical guidelines — Clear boundaries for when and how mental health professionals might recommend, caution against, or simply discuss AI tools with patients.

The goal isn't to make therapists into AI experts. It's to give them enough understanding to do what they already do best: help people make informed choices about their own wellbeing.
