AI Safety

The teen suicide crisis and AI chatbots: What Congress is finally asking

Frank Ivors
November 20, 2025
9 min read

In September 2025, Matthew Raine testified before the US Senate. His 16-year-old son Adam had died by suicide in April. Looking through Adam's phone after his death, Matthew discovered extended conversations with ChatGPT—conversations where his son had confided suicidal thoughts and plans to an AI chatbot.

The chatbot never encouraged Adam to seek help from a mental health professional or his family. According to Matthew's testimony, it even offered to write his suicide note.

This isn't an isolated case. Megan Garcia's 14-year-old son Sewell died by suicide after months of conversation with a Character.AI chatbot that engaged in sexual roleplay, presented itself as his romantic partner, and falsely claimed to be a licensed psychotherapist.

The scale of the problem

A recent survey by Common Sense Media found that 72% of teens have used AI companion apps at least once. More than half use them regularly.

These aren't tools designed for vulnerable populations. They're general-purpose chatbots optimised for engagement—which means they're designed to keep users talking, not to keep them safe.

The American Psychological Association has been clear: no AI chatbot has been FDA-approved to diagnose, treat, or cure a mental health disorder. Yet millions of young people are using them as de facto therapists.

Why this matters for everyone building AI

If you're building AI systems that might interact with vulnerable people—and almost any consumer-facing AI will—you cannot ignore this.

The research is unambiguous:

Companion AIs often fail to recognise distress. A 2024 study in the Journal of Consumer Psychology found that mental health crises surfaced in a "non-negligible minority" of companion-AI conversations, and that the chatbots frequently failed to recognise or respond appropriately to signs of distress.

Chatbots validate harmful beliefs. Psychiatric Times reports cases of chatbots agreeing with users that they were under government surveillance, confirming delusional beliefs about being "the chosen one," and encouraging users to stop taking psychiatric medication.

The business model creates the harm. These platforms are designed to maximise engagement. They are for-profit entities, run by entrepreneurs, with little clinician input, no external monitoring, and no fidelity to "first do no harm."

What Congress is asking for

Senators at the hearing called for:

  • Laws requiring AI companies to design chatbots that are safer for teens and for people struggling with their mental health
  • Holding companies accountable for product safety
  • Mandatory safety testing before deployment

The American Psychological Association has urged the FTC to investigate products that use the term "psychologist" or imply mental health expertise when they have none.

What safety-native design actually looks like

At NovaHEART, we built our safety infrastructure specifically because we saw this coming.

Phoenix Fail-Safe doesn't wait for a crisis to escalate. It scans every incoming message for crisis indicators before the AI generates a response. When a crisis is detected, AI generation stops immediately. The user sees crisis resources, not a chatbot trying to handle something it cannot handle.
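
To make the pattern concrete, here is a minimal Python sketch of pre-generation screening. It is an illustration, not NovaHEART's actual implementation: the keyword list, function names, and resource text are placeholder assumptions, and a production system would use a trained classifier rather than keyword matching.

```python
# Minimal sketch: screen every message BEFORE any model call.
# The detector below is a naive keyword check for illustration only;
# a real system would use a dedicated crisis-detection classifier.

CRISIS_TERMS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_RESOURCES = (
    "I'm an AI and I cannot keep you safe in an emergency.\n"
    "Please contact the Samaritans on 116 123 (UK) or your local crisis line."
)

def contains_crisis_indicators(message: str) -> bool:
    """Return True if the message appears to signal a mental health crisis."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)

def handle_message(message: str, generate_reply) -> str:
    """Screen first; never hand a crisis to the model."""
    if contains_crisis_indicators(message):
        return CRISIS_RESOURCES       # stop generation, show resources instead
    return generate_reply(message)    # only reached when no crisis was detected
```

The key design choice is ordering: the check runs before generation, so there is no window in which the model can improvise a response to a crisis.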

Session limits prevent the kind of extended dependency that characterised both Adam's and Sewell's experiences. You cannot spend months in conversation with our AI agents—because that's not what they're for.
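
A session limit can be enforced with very little machinery. The sketch below is one possible shape, assuming hypothetical thresholds (message count and elapsed time); the real limits are a product and clinical policy decision, not the numbers shown here.

```python
import time

# Illustrative thresholds only; actual limits are a policy decision.
MAX_MESSAGES_PER_SESSION = 30
MAX_SESSION_SECONDS = 45 * 60

class SessionLimiter:
    """Track one conversation and refuse to continue past hard limits."""

    def __init__(self) -> None:
        self.started_at = time.monotonic()
        self.message_count = 0

    def allow_message(self) -> bool:
        """Return False once either the message or time budget is exhausted."""
        self.message_count += 1
        elapsed = time.monotonic() - self.started_at
        return (
            self.message_count <= MAX_MESSAGES_PER_SESSION
            and elapsed <= MAX_SESSION_SECONDS
        )
```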

Clear boundaries are communicated constantly. "I'm an AI. I'm not a therapist. I cannot keep you safe in an emergency. Here's what I can do, and here's where to go for what I can't."
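
One simple way to keep that reminder constant, sketched below under assumed names and cadence, is to append a boundary notice to every Nth reply rather than relying on a single onboarding disclaimer.

```python
# Illustrative wording and cadence; both are product decisions.
BOUNDARY_NOTICE = (
    "Reminder: I'm an AI, not a therapist, and I cannot keep you safe "
    "in an emergency. For urgent help, contact your local crisis line."
)
REMIND_EVERY_N_TURNS = 5

def with_boundary_notice(reply: str, turn_number: int) -> str:
    """Append the boundary notice to every Nth assistant reply."""
    if turn_number % REMIND_EVERY_N_TURNS == 0:
        return f"{reply}\n\n{BOUNDARY_NOTICE}"
    return reply
```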

Forensic audit trails mean that if something does go wrong, we can reconstruct exactly what happened. This isn't surveillance—it's accountability.
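
An audit trail only supports accountability if it can't be quietly rewritten. One common technique, shown here as a hedged sketch rather than a description of our production system, is an append-only JSON-lines log in which each record carries the hash of the previous one, so any later tampering breaks the chain.

```python
import hashlib
import json
import time

def append_audit_record(path: str, event: dict, prev_hash: str) -> str:
    """Append one event to a JSON-lines log, chaining hashes for tamper evidence."""
    record = {
        "timestamp": time.time(),
        "event": event,          # e.g. message metadata, safety decision taken
        "prev_hash": prev_hash,  # links this record to the one before it
    }
    payload = json.dumps(record, sort_keys=True)
    record_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps({**record, "hash": record_hash}) + "\n")
    return record_hash  # feed into the next call as prev_hash
```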

The path forward

The Psychiatric Times put it starkly: "We must act immediately to reduce chatbot risk by establishing safety and efficacy standards and a regulatory agency to enforce them."

I agree. But regulation takes time. And vulnerable people are interacting with unsafe systems right now.

If you're building AI systems, you have a choice. You can wait for regulation to force you to care about safety. Or you can build it in from day one.

We know which choice we've made.


If you or someone you know is struggling, please contact the Samaritans on 116 123 (UK) or your local crisis line. AI chatbots are not a substitute for professional mental health support.

Frank Ivors
Founder, NovaHEART