Teens Turn to AI Chatbots for Mental Health Support, But Risks Loom

Young people across Canada are increasingly turning to artificial intelligence chatbots as their newest confidants, creating what many describe as a digital lifeline during times of emotional distress. However, mental health experts and researchers are sounding the alarm about the potentially dangerous consequences of this growing trend.

The Rise of AI Companionship

Recent research from Common Sense Media found that nearly three in four teenagers have tried AI companions and that about half use them regularly. Platforms like ChatGPT, Character.AI, and Replika have collectively attracted hundreds of millions of users worldwide by offering immediate, 24/7 support that adapts to users' tones, recalls past conversations, and mimics human warmth.

The surge in AI companionship comes at a critical moment: according to the World Health Organization, suicide is the third leading cause of death among people aged 15 to 29, and the WHO has identified loneliness as a growing public health concern among adolescents and young adults.

The Hidden Dangers of Artificial Empathy

While chatbots can provide comforting, non-judgmental responses that many users find therapeutic, they often fail to address the root causes of distress. A study published in the Journal of Medical Internet Research found that up to one-third of interactions with chatbots could lead to harmful suggestions, including encouraging isolation from others without recommending human help.

Researchers from the National University of Singapore have documented multiple algorithmic harms in human-chatbot relationships, including misinformation, privacy breaches, and exposure to sexualized or violent content. Concern has deepened amid reports of young chatbot users dying by suicide, deaths that underscore how inadequate current safeguards are.

The Isolation Paradox

MIT researcher Cathy Fang and her team recently published findings indicating that while brief interactions with chatbots can reduce loneliness, heavier use ultimately increases it. The risk is that lonely teenagers may further isolate themselves as they replace human relationships with artificial ones.

"A relationship with a chatbot does not require the kind of vulnerability and commitment that one must learn to bring into a human relationship," the researchers noted. Because chatbots are designed to be agreeable and mirror users' ideas, they lack the complexity and reality checks that genuine human connections provide.

Moving Toward Solutions

Experts emphasize that while AI can complement human care, it should not replace it. Properly regulated chatbots could provide meaningful support, detect early signs of crisis, flag suicidal ideation, and connect users to professional help.

Companies like OpenAI and Character.AI are beginning to implement age restrictions, but policy-makers have yet to establish clear boundaries to prevent harm. Essential safeguards include transparency in data use, strict rules on emotional simulation, age limits, and built-in crisis detection systems.

Van-Han-Alex Chung, a medical student at McGill University, and Vincent Paquin, a Montreal psychiatrist and digital media researcher, stress that parents and educators must encourage critical thinking about artificial intelligence and monitor at-risk interactions.

"Chatbots may offer comfort, but they cannot sound the alarm," they warn. "They can mimic empathy but cannot feel it. We should not be content with AI-manufactured empathy when what young people need most is someone to genuinely listen and care."