How banning social media for teens could backfire, and what works instead

An educator argues for algorithm accountability, digital literacy, and shared responsibility across systems
Harmful content can surface within minutes, raising questions on how platforms shape young users’ feeds

By Yasmine Claire, DP Psychology, LS, Self-Taught, ToK, Secondary Learning Support Head at Stonehill International School, Bengaluru

When Goldilocks ate the baby bear's porridge, she declared it was just right. Consider this wisdom from a fairy tale: governments across the world are trying to get things just right to make social media safe for children and young people. Australia's social media ban for people under 16 came into effect on 10 December 2025. In the UK, the government is planning to trial a social media ban on a volunteer sample of 300 teenagers. Closer home, Karnataka has completed a draft policy on restricting social media usage among teenagers, developed by the Karnataka State Mental Health Authority (KSMHA) in collaboration with the Department of Health and Family Welfare and the National Institute of Mental Health and Neuro-Sciences (NIMHANS).

However, bans create rebels, as they rightly should, and restrictions create workarounds. So how can we achieve effective use of social media and help children and young people understand problematic content? Social media literacy and parental involvement in children's social media usage are the familiar answers, and making digital literacy part of school curricula is a reasonable and implementable way forward. No argument here.

Instead, let's talk about demanding accountability from social media companies. Let's talk about algorithms. Designed to keep you scrolling, they pick up not just on what you might want to buy or watch; they also suggest content based on what people in your demographic have reacted to, be it a 'like' or any other reaction. You therefore do not need to have shown any prior interest in the content you are shown. Extremist content, whether violent, sexual or misogynistic, is no longer a thumb scroll away; it has infiltrated your feed like an occupying force.
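To see why no prior interest is needed, the mechanism described above can be reduced to a toy sketch: rank items purely by how often same-demographic peers reacted to them. All names, demographics, and reaction data below are hypothetical illustrations, not any platform's actual algorithm.

```python
# A minimal sketch of demographic-driven recommendation: a brand-new user
# with zero history still receives content, because the ranking depends only
# on what peers in the same demographic reacted to. Purely illustrative.
from collections import Counter

def recommend(user_demographic, reactions, already_seen, top_n=2):
    """Rank unseen items by how many same-demographic users reacted to them."""
    counts = Counter(
        item
        for demographic, item in reactions
        if demographic == user_demographic and item not in already_seen
    )
    return [item for item, _ in counts.most_common(top_n)]

# Hypothetical reaction log from other users: (demographic, item) pairs.
reactions = [
    ("teen-male", "gym-clip"),
    ("teen-male", "gym-clip"),
    ("teen-male", "rage-bait"),
    ("teen-female", "beauty-filter"),
]

# A fresh teen-male profile with an empty history still gets peer-driven picks.
print(recommend("teen-male", reactions, already_seen=set()))
# ['gym-clip', 'rage-bait']
```

The point of the sketch is that the user's own behaviour never enters the calculation; the demographic label alone decides what surfaces, which is exactly how a 13-year-old's feed can fill with content they never asked for.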

Read ‘Three Minutes to Harm: Big Tech’s Little Victims’, in which four profiles of regular 13-year-olds were created across social media platforms, with the interests any 13-year-old has: music, sports, gaming, beauty. All it took was three minutes. Just three minutes for harmful content to appear in their feeds; not because the profiles engaged with it, but because the algorithm decided it was appropriate for a 13-year-old. And it was gendered: boys were suggested violent, misogynistic content, while girls were shown sexualised beauty content.

Research shows that 93% of teenagers globally have used at least one social media platform. Research also tells us that continuous use of social media keeps dopamine pathways activated, reinforcing reward-based behaviours and raising the risk of various addictions. With AI becoming ubiquitous, algorithms are more targeted and personalised, deepfakes have multiplied, and problematic mental health chatbots are replacing human therapists.

So, which regulatory mechanisms will actually be meaningful, implementable, and accepted by the demographic we are concerned with? Tech companies do have systems to self-monitor content on their platforms, filtering harmful content, disinformation, extremist viewpoints, and so on. The problem is that this monitoring is itself AI- and algorithm-driven. Tech companies are also reluctant to regulate when the cost to them is a fall in profits.

Everyone needs to be a part of reducing harm via social media.

●      Governments need to make major systemic changes and acknowledge that teenage mental health issues need to be addressed at a policy-making level. This applies to both their online and offline lives.

●      Tech companies need to self-regulate more and prioritise clear ethical stances over profit-driven goals.

●      Communities: the family, the school and the wider social networks of teenagers need to reflect on their own limitations in understanding mental health, irrespective of the underlying cause. They need to understand the importance of support systems: mental health professionals and other trusted adults.

●      Adults: members of the government, parents, teachers, and other professionals need digital literacy before we can meaningfully teach it to children. We need to understand, accept, and empathise with the social media-driven life that children and young people have.

This is why a blanket ban will not work on its own. It may show short-term results but will have very limited long-term impact. Regulations need to be developed, implemented, and monitored across all stakeholders, not just children and young people.

We all want our porridge (or your dish of choice) to be just right. For that to happen, we have to do the hard work to make it happen. It starts with each one of us.

Views expressed are the author's own.

Yasmine Claire is an educator specialising in psychology and student support, currently serving as Secondary Learning Support Head at Stonehill International School, Bangalore. With a background in DP Psychology and Theory of Knowledge (ToK), she focuses on adolescent behaviour, digital environments, and the ways young people engage with learning and mental health in a rapidly evolving world.

EdexLive
www.edexlive.com