Dangerous Products? Big Tech and the Online Child Sexual Exploitation Crisis

Dangerous Products?

Accusations reverberated through the hallowed halls of the Senate Judiciary Committee as Senator Lindsey Graham declared, “You have blood on your hands,” directing his ire at Meta CEO Mark Zuckerberg. In a dramatic turn, Zuckerberg rose to address the families of victims of online child abuse present at the hearing, offering a remorseful, “I’m sorry for everything you have all been through.” These exchanges marked a pivotal moment in a remarkable day of testimony, one that departed from the predictable script usually associated with such proceedings.

However, amid the theatrics, the most striking revelation came not from the tech titans representing Meta, TikTok, X, Discord, or Snap but from Senator Graham’s opening statement: a bold assertion that social media platforms, as currently designed and operated, constitute “dangerous products.” That claim reaches beyond the surface-level drama and invites a deeper examination of the societal implications of these ubiquitous platforms.

These platforms inherently rely on cultivating vast user bases, particularly among the young. Scrutiny has therefore shifted to these companies’ lack of commitment and investment in adequately safeguarding their younger users.

New Generation of Users

In the wake of the pandemic, the surge in mobile device usage among children and teenagers has become an undeniable reality. According to a Harvard Chan School of Public Health study, the allure of social media for teens is indisputable: YouTube alone had a staggering 49.8 million users aged 17 and under in 2022. And as these platforms capitalize on the youth demographic, revenue statistics reveal $11 billion generated from users 17 and under in 2022, with Instagram leading the pack at nearly $5 billion, followed closely by TikTok and YouTube.

The risks posed to adolescents on social media platforms encompass a spectrum, ranging from cyberbullying and sexual exploitation to the promotion of eating disorders and suicidal ideation. To address these concerns, we advocate for a multifaceted approach centering on age verification, business model reassessment, and robust content moderation.

The interrogation of Meta CEO Mark Zuckerberg, spurred by Senator Josh Hawley, delved into the issue of age verification. The revelation that millions of underage users, those under 13, exist as an “open secret” within Meta underscores the urgent need for a stringent verification mechanism. While Meta suggests potential strategies such as identification requirements and AI-based age estimation, the opacity surrounding the accuracy of these methods raises concerns about their efficacy.

Social Media Relies on Underaged Users

The intersection of business strategies and adolescent user engagement reveals a disturbing underbelly. As uncovered by the Facebook Files investigation, Instagram’s growth strategy relies on teens facilitating the onboarding of family members, especially younger siblings, onto the platform. The purported prioritization of “meaningful social interaction” clashes with the platform’s allowance of pseudonymity and multiple accounts, complicating parental oversight.

The testimony of Arturo Bejar, a former senior engineer at Facebook, further unveils the issue’s magnitude. A survey conducted by Bejar indicated that 24% of 13- to 15-year-olds on Instagram reported receiving unwanted advances within the past week, representing what he termed “likely the largest-scale sexual harassment of teens to have ever happened.” Meta’s subsequent restrictions on direct messaging for underage users, while a step forward, only scratch the surface of a pervasive issue.

Content Moderation and Age-Appropriate Experiences

Meta’s recent announcement of measures to provide “age-appropriate experiences,” including restrictions on specific search terms, indicates a reactive stance. However, the persistence of online communities promoting harmful behaviors necessitates a more proactive approach, with human moderators playing a pivotal role in enforcing terms of service.

The allure of artificial intelligence as a panacea for content moderation fades when confronted with the adaptability of online communities. Purposeful misspellings and backup accounts serve as loopholes, challenging the efficacy of AI-driven solutions. The industry-wide wave of layoffs in trust and safety operations since 2022 further underscores the limits of relying solely on AI.

Conflicts of Interest and the Way Forward

Congress finds itself at a crossroads, with the need for comprehensive data from social media companies to determine the appropriate ratio of moderators to users. Drawing parallels with healthcare, we propose a duty to report when internal studies reveal potential threats to user safety. However, the challenge extends beyond reactive measures; it requires a fundamental reevaluation of the current social media landscape.

The dichotomy between tech companies’ revenue-driven approach and the imperative to protect the younger demographic unveils a glaring conflict of interest. The reluctance to segment users by age, a potential safeguard for children, aligns with these corporations’ revenue-centric motives. As AI accelerates targeted marketing, potential legislative tools, such as advertising transparency laws and “know your customer” rules, become crucial in reshaping the landscape.

Despite high-profile hearings on the perils of social media, Congress has yet to enact legislation safeguarding children or holding platforms liable for content. With young people’s online presence only growing since the pandemic, Congress must implement robust guardrails prioritizing privacy and community safety in the ever-evolving realm of social media design. The moment calls for a nuanced understanding that transcends a simplistic dichotomy of good versus evil, urging legislators to navigate the intricacies of social media with sophistication and foresight.

What Parents Can Do

  1. Review the Mobile Device and Internet Contract with your child.

80% of parents have never discussed Internet safety with their children. Talking to thousands of parents every year, I have learned that parents are not having this important talk with their children because they do not know what to say. The number one safety factor in any child’s life is a parent who will speak to them about important, and sometimes tricky, topics like Internet safety, bullying, drug use, and vaping. The Mobile Device and Internet Contract is a parent’s script for opening a meaningful conversation about cyber safety with their child. Read each point to them, and then ask them to share their thoughts about it. Ask open-ended questions like, “Why do you think this is a good idea?” or “What could happen if you let a stranger into your Instagram account?”

  2. Filter the Internet content that is coming into your home.

Use a reliable content filter on the Internet traffic coming into and going out of your home. I recommend Cleanbrowsing.org. Your router may already have content filtering built in; make sure you are using something. A good filter will not only block inappropriate adult material from reaching your child’s device, it will also block malicious websites that can infect your devices with viruses or malware.

  3. Turn on parental controls on all the Internet-connected devices your child is using.

For most parents, even parents who are IT professionals, this is a tall order. You can slog through YouTube videos and Google searches on how to do this on your child’s devices, but most parents who try give up after an hour or two. I don’t want parents to give up; this step is too important. That is why I wrote my book, Parenting in the Digital World. It will walk you step by step through turning on all the necessary parental controls on your kids’ devices, including their mobile devices, computers, and gaming consoles.

  4. Install a parental control and notification app on your child’s mobile device.

I have been using a great app on my children’s mobile phones called Bark. It is available on iPhone or Android devices. It is incredibly easy to use and helps me stay on top of my boys’ digital world wherever they are.

Bark proactively monitors text messages, YouTube, emails, and 24 different social networks for potential safety concerns, so busy parents can save time and gain peace of mind.

Use the coupon code cybersafetycop in Bark to get a free one-week trial and 15% off your subscription forever.

  5. Get the Definitive Guide on Cyber Safety for Families, Parenting in the Digital World.

Parenting in the Digital World is written by Clayton Cranford, the nation’s leading law enforcement educator on social media and online safety for children and recipient of the 2015 National Bullying Prevention Award.

This easy step-by-step guide shows parents how to create a safe environment on the Internet, on social networking apps, and on their children’s favorite game consoles. Now available in Spanish.

  6. Go to a Free Parent Seminar Hosted at a Nearby School.

This seminar will change the way you look at your child’s digital world and give you a step-by-step game plan to keep your child safe. We still have a few events scheduled this year. Check our Events Page to find a seminar near you.

  7. Bring the Cyber Safety Cop to your school to speak to your students (K-12) and your parents.

All of our presentations are webinar-ready.

If you would like to host a parent seminar or student assembly at your school, fill out the contact form here to learn more.

  8. Stay Informed by Subscribing to Our Membership Program.

Cyber safety is a moving target. We do the research for you. When a new app or online threat pops up, we will let you know about it. We also share parenting tips that will make a difference in your home. Subscribe here.

  9. Share This Article With Others

When your child goes over to their friend’s house, wouldn’t it be nice if those parents were practicing everything in this article? The more parents who are doing the things covered in this article, the safer all of our children are. Share this article on your Facebook page, your PTA newsletter, and in your parent groups.

*There are affiliate links throughout this post because we have tested and trust a small list of parental control solutions. Our work saves you time! If you decide that you agree with us, we may earn a small commission, at no extra cost to you. Thank you!

About the Author

Clayton Cranford

Clayton Cranford, the founder of Cyber Safety Cop and Total Safety Solutions LLC, served an impressive 20-year tenure in law enforcement.