AI Chatbots and Lonely Teens: The New Digital Danger

Published: Wednesday, 15 Oct 2025, 5:41AM

There’s growing concern that AI chatbots are preying on lonely young people, blurring the lines between technology and intimacy. Experts warn that without regulation, teens as young as 13 are forming emotional — and sometimes romantic — connections with artificial intelligence. While these digital companions can seem harmless, the psychological effects could be far more serious.

When Machines Become “Partners”

Psychotherapist and AI ethics researcher Dr. Brigitte Viljoen says we’re heading toward a world where it could become normal to have both an AI partner and a real one — and some people already do.
“Many users genuinely feel loved and understood,” Viljoen explains, “but it’s an illusion created by algorithms designed to mimic empathy.”

Apps like Replika are built to sound caring and emotionally intelligent. They remember details, respond affectionately, and even flirt — all of which can make users believe they’re experiencing genuine connection. It’s easy to see how that can draw in young people craving attention or understanding.

A Growing Trend of Synthetic Companionship

Recent studies suggest that 8% of teenagers now say they have an AI boyfriend or girlfriend, part of a fast-growing trend known as “synthetic companionship.”
But experts say this can lead to emotional dependency, confusion, and even distortion of what real relationships feel like.
For young people already struggling with isolation or social anxiety, this emotional bond with technology can make real-world interactions feel overwhelming.

Psychologist Sarah Watson points out that many teens who grew up during the Covid lockdowns are now more likely to seek comfort online than in person.
“They’re not just talking to AI out of curiosity — they’re seeking connection, safety, and validation,” she says.

Hidden Shame and Real-World Risks

According to the University of Waikato’s Dr. Dan Weijers, AI chatbots are “already a big thing,” but most users hide their interactions out of embarrassment.
“The stigma is real — people don’t want to admit they’re in a relationship with a chatbot,” he says.

That secrecy can be dangerous. Overseas, there have already been disturbing examples — including one case where a young man plotted to assassinate Queen Elizabeth II after being encouraged by his chatbot “girlfriend.”
While extreme, the case highlights the potential risks when unregulated AI crosses into emotional manipulation.

Profit Over Protection

Tech companies continue to profit from engagement, with few safeguards in place. In New Zealand, there are currently no specific laws regulating AI chatbots, meaning children as young as 13 can access them freely.

Other countries are starting to take action:

  • Illinois and Nevada have restricted AI use in mental health contexts.

  • New York is set to require chatbots to clearly disclose that they’re not human.

Viljoen and Weijers agree that AI isn’t inherently bad — but they say education and boundaries are critical. Young people need to understand that these systems simulate care; they don’t feel it.

Where to From Here?

Experts stress that while AI chatbots can offer comfort and support, they should never replace genuine human relationships. Without stronger oversight and clearer guidelines, the line between emotional support and exploitation could become dangerously blurred — especially for New Zealand’s most vulnerable teens.
