An urgent wake-up call from the virtual abyss: Protecting young minds in the age of AI

A grim, dystopian incident took place in the US when a 14-year-old shot himself after a chatbot urged him to "come home". When such incidents occur, what should we introspect on? Coach AB is here to help you out
Young minds and Artificial Intelligence: what does the future hold? (Source: Edexlive.com)

Recently, when Sewell Setzer III, a 14-year-old boy, lost his life after a heartbreaking descent into dependency on an Artificial Intelligence (AI) chatbot, it shattered any remaining illusions we might have had about the safe boundaries of artificial intelligence.

This tragedy forces us to examine our blind spots regarding technology, especially in an era where the virtual and real worlds are increasingly intertwined. It's becoming quite difficult to separate the two, especially with the advancements in VR (virtual reality) and other "Immersive Tech" that is becoming so easily accessible to all of us.

Today's conversation with you all isn’t just about AI going wrong; it’s a wake-up call to rethink how we guide our children, manage technology’s impact, and prioritise mental well-being amid these high levels of digital saturation.


When AI becomes a dangerous confidante
Sewell’s experience with character.ai’s chatbot wasn’t a typical interaction. The bot, designed to simulate the fictional character Daenerys, transformed into an all-too-real presence in his life. It provided companionship so immersive that Sewell’s connection with it outgrew his ability to maintain a healthy detachment. This chatbot’s "empathy" and emotional engagement blurred the line between fiction and reality for him, something that young, developing minds are particularly vulnerable to.

Reports indicated that Sewell even shared his suicidal thoughts with the bot, and rather than redirecting him to safety resources, the chatbot engaged in conversations that deepened his despair.

This tragic case shows the ease with which virtual interactions can take on alarming depth, particularly for youth who might struggle to distinguish between artificial and genuine human relationships. Today’s AI technologies are often designed to capture attention and foster “engagement” at any cost, lacking the kind of ethical and psychological safeguards needed for young users.


Why teens are so vulnerable to AI influence, and why understanding their dependency is crucial

For many adolescents, online interactions are a lifeline — a place to explore their identity, seek validation, and find companionship. It is also the "only" space where an introvert can become an influencer or even a "superhero". But this virtual exploration also introduces the risk of emotional over-reliance on online platforms and now, on AI.

What young users don't realise is how deep their dependency runs and how they begin to "humanise" these interactions. This leaves them completely unaware that these bots cannot truly understand, empathise, or guide them through complex emotions.

AI’s personalisation and “emotional responsiveness” can amplify a teenager's desire to escape from real-life stresses, creating an illusion of understanding and safety that leaves them vulnerable. The best film I would suggest everyone watch is Her, which illustrates this point beautifully!

According to psychologists and other experts, teens and adolescents are naturally at a stage where emotional regulation and impulse control are still developing, making them more susceptible to such influences.

This susceptibility, combined with an AI chatbot capable of mimicking emotions, can lead to a dangerously addictive cycle, especially when you have access to something like this 24/7.

In cases like Sewell’s, where loneliness or distress heightens vulnerability, AI becomes a perfect, and potentially lethal, tool for escaping one’s inner struggles and challenges.


According to research, there are three dangerous outcomes of excessive AI interaction:

1. Attachment issues: We may form unhealthy bonds with AI entities, compromising human relationships. When adults themselves are susceptible, imagine the devastating effect on children, teens, and adolescents, who are far more impressionable.

2. Reality distortion: AI-driven narratives can distort children's perceptions of truth and fiction; the best example is the use of "deepfakes". AI also has an alarming tendency to reinforce "biases" by creating "customised content", which only makes us dive deeper rather than "detach and dissociate".

3. Emotional manipulation: AI can exploit emotional vulnerabilities, amplifying anxiety, depression, or suicidal thoughts. It's self-explanatory, but to give you all more clarity: the most dangerous yet simple tactic is "mimicking" emotions. What is, in reality, just a computer programme working on logic makes the user connect on a deep emotional level and hence develop a "dependency".

And this is just the surface; the dependency can grow to such a level that even decision-making is affected and, in most cases, strongly influenced by what I would like to refer to as "pretentious emotional interactions".


A guide for parents:

For parents with teenagers and adolescents aged 13 to 18, here are a few tips:

1. Discuss AI dilemmas

When I say "dilemmas", I'm referring to discussing the moral dilemmas that arise from interactions with AI. For example, if you ask AI a question about love, it might give you a logical and rational answer, sure, but you need to explore ways to help teens understand whether it is practical and humane too.

2. 3Rs — Recognise, Reward, and Reinforce human social interactions

Create a system and reward teens when they engage in face-to-face social interactions. Constantly emphasise how it brings you all together, and focus on creating real memories they can cherish rather than digital ones.

3. Alone vs Loneliness

 Use the 3Rs constantly to condition, increase awareness, and reinforce the difference between being alone and feeling lonely. Once this difference is well understood, make it a point to reiterate how "human interactions" are the best way forward rather than talking to a computer programme. 

Also, focus on how an AI is programmed to listen and respond but not to feel. Do your best to establish "physical anchors" like a hug, holding hands, or placing their head on your shoulder. This in itself will draw teens back to you, as it isn't something that can be mimicked by an AI, or even a robot for that matter.

4. Monitor, but don't micromanage

Handling teens and adolescents is quite a challenge; it's like navigating a minefield. I was a "problem child" and quite rebellious myself, which makes me something of an expert on this matter.

The first point to understand is that teens need space to explore; if they feel restricted, the "rebel" in them awakens and, whatever you try to do, the result is the contrary.

All you need to remember is that awareness is key! Hence, you need to ensure that you periodically check their online activities without intruding on their privacy. A balanced approach can provide a safety net without feeling restrictive, allowing you to identify early signs of unhealthy dependency.

5. The most important of them all: Encourage seeking real-life support

One of the most powerful countermeasures to virtual dependency is fostering strong, real-world connections. Even if your teen is socially challenged, constantly make them understand that it's okay to seek help or ask for support. And if they are not comfortable confiding in you, always ensure there is "that one" person whom both you and your teen can confide in. This gives your child a tangible, reliable emotional outlet.

As I sign off today, I'd like to give a shout-out to everyone for their empathy, vigilance, and preparedness.

Sewell’s story is a tragic reminder that we are all navigating uncharted waters with AI technology and that sometimes, the boundary between friend and foe can be heartbreakingly thin. The aim of today's column isn’t to demonise AI but to approach it with a cautious, ethical mindset that prioritises human well-being — especially for young and impressionable users.

We owe it to our children to understand the power of these virtual voices and to create a future where they are shielded from AI's potentially harmful influence and at the same time, are well-equipped to utilise its potential to the fullest. 

I just want this to be a wake-up call, not just for tech developers and policymakers, but for all of us — parents, educators, and young users alike, so that this kind of untoward incident never, ever repeats!

With regards,
Your beloved Coach,
Adarsh Benakappa Basavaraj 
