Regulation is a Start, but Empowering Parents is the Real Solution


The recent work by lawmakers like Michigan State Senator Mallory McMorrow is a major step forward for kids’ online safety. By challenging “Big Tech” on addictive designs and new AI tools, they are doing something vital: raising awareness and starting a conversation that is long overdue.

As someone who has spent years developing technology to protect children, I believe we have a unique opportunity to turn this momentum into real, lasting protection. While it is tempting to focus primarily on automatic “off switches” provided by platforms, our experience shows that to truly protect children, we must direct our legislative focus toward two critical areas:

1. Keeping Kids in Visible Digital Spaces

When regulation focuses only on blocking or over-restricting mainstream platforms, we risk an unintended consequence: driving children into “darker,” unmonitored corners of the web. Kids are digital natives; if they feel disconnected, they will find alternatives. Our goal should be to keep children in digital spaces where we have the tools to monitor and supervise them, rather than pushing them toward hidden apps that are beyond any parental reach.

2. Moving from Automatic Rules to Active Involvement

Relying on platforms to provide “automatic” safety features is a good first step, but it is rarely enough. Tech-savvy kids can often find ways around these built-in restrictions. Real, long-term safety comes from active parental involvement. Legislation has the power to move beyond simple mandates for tech companies and instead create an environment where parents are equipped to lead the way.

The Technical Priority: Bridging the “Onboarding” Gap

To truly empower parents, we need to address the technical hurdles they face every day on the devices their children use. While social media often takes the spotlight, the platform gatekeepers, Google and Apple, hold the keys to making safety tools accessible and effective.

  • Android: Currently, safety apps require a long and complex list of permissions. This creates a difficult “onboarding” process. Many parents feel overwhelmed by technical warning messages and give up, leaving their children without protection.
  • Apple (iOS): On the other side, Apple’s restrictive environment often blocks the very APIs that safety services need to help parents monitor activity. While privacy is a core value we all share, it should not be a barrier that prevents parents from keeping their children safe.

A Path Forward for Lawmakers

The most significant impact a law can have is not just penalizing platforms, but ensuring that the infrastructure exists for parents to stay involved. We encourage regulators to focus on:

  • Infrastructure for Safety: Requiring Google and Apple to provide a simple, clear, and verified setup process for legitimate child safety services.
  • Support for Monitoring: Ensuring that safety tools can provide parents with the context they need to have honest, open conversations with their kids.
  • Simplifying the Choice: Making it easy for a parent, regardless of their technical skill, to install and manage protection.

We don’t need the platforms to raise our children for us. We need legislation to ensure we have the tools to raise them ourselves in a digital world.

Digital Parenting, online child safety, regulation

The School’s New Paradox: Legally Accountable for Bullying, But Blind to the Data


A recent court ruling in Spain is sending shockwaves through educational institutions across Europe, and it is only a matter of time before the impact is felt worldwide. A school was ordered to pay significant compensation to a student for severe bullying, much of which took place on social media, entirely outside of school hours and off-campus.

The message is unmistakable: Schools are now being held legally and financially responsible for a problem they cannot see.

The Core Conflict: Responsibility Without Visibility

School leaders are being pushed into an impossible position.

  • The Law: Increasingly demands that schools prevent and respond to digital harm within their student community.
  • Privacy Regulations: Strictly prohibit schools from accessing students’ private interactions on WhatsApp, TikTok, or Instagram.

This creates a high-stakes paradox: How can a school mitigate a digital crisis that it is legally forbidden to monitor?

The Solution: Privacy by Design, Safety by Context

At PureSight, we believe the answer isn’t to turn schools into surveillance hubs, but to solve the ultimate digital parenting dilemma: How do you protect a child’s safety without violating their right to privacy?

We built Surfie to bridge this gap. Our AI engine doesn’t expose a child’s entire digital life or every private conversation to their parents. Instead, it respects the child’s privacy by working silently in the background, only “raising a flag” when it identifies high-risk events that demand adult intervention.

By analyzing context and emotional tone rather than just flagging keywords, Surfie provides parents with actionable awareness regarding:

  • Cyberbullying & Harassment: Distinguishing between friendly banter and systematic, harmful shaming.
  • Predatory Grooming: Detecting sophisticated approaches by pedophiles early in the cycle.
  • Dangerous Viral Trends: Identifying life-threatening social media challenges before they escalate.

This model preserves the child’s digital autonomy while ensuring that when a real threat emerges, the parents are informed and empowered to respond according to their own judgment and values.
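The idea of weighing context and tone rather than matching bare keywords can be pictured with a deliberately tiny sketch. This is not PureSight’s engine; the word lists and weights below are invented purely for illustration:

```python
# Toy illustration of "context over keywords" (not a real safety engine):
# a naive keyword filter flags any risky word, while a contextual scorer
# also asks whether the message targets a person and whether the tone
# carries playful markers.

RISK_WORDS = {"kill", "die", "hate"}       # illustrative only
TARGETED = {"you", "your", "yourself"}     # cues that a person is targeted
PLAYFUL = {"lol", "haha", "jk", "game"}    # cues of friendly banter

def _tokens(message: str) -> set[str]:
    return set(message.lower().replace("!", "").replace(".", "").split())

def keyword_flag(message: str) -> bool:
    """Naive filter: flags every message containing a risk word."""
    return bool(_tokens(message) & RISK_WORDS)

def contextual_score(message: str) -> float:
    """Score 0.0-1.0: risk word is only a starting point; context adjusts it."""
    words = _tokens(message)
    if not (words & RISK_WORDS):
        return 0.0
    score = 0.5
    if words & TARGETED:
        score += 0.4   # directed at a person: riskier
    if words & PLAYFUL:
        score -= 0.4   # playful markers: likely banter
    return max(0.0, min(1.0, score))

banter = "haha you kill it in this game"
harm = "everyone would be happier if you just die"
# keyword_flag() flags both messages; contextual_score() separates them
```

With a flagging threshold of, say, 0.7, the keyword filter raises a false alarm on the banter while the contextual score flags only the harmful message, which is the behavior a parent actually wants.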

The “Two-Family Rule”: Small Action, Community Impact

Our data reveals a fascinating insight: You don’t need an entire classroom to adopt safety tools to change the culture.

Two proactive families are often enough. When even one parent receives an early warning and intervenes, by talking to their child or reaching out to another parent, the cycle of escalation is broken. That single intervention disrupts the toxic dynamic for the entire class. Digital awareness creates real-world stability.

My Advice to School Leaders

The legal pressure is mounting, and the Spanish case proves that “it happened online” is no longer a valid defense. However, the answer is not for educators to become investigators. Instead:

  1. Empower Parents: Provide them with the tools to be the “digital first responders” for their own children.
  2. Maintain Clear Boundaries: Keep the school out of private messages to avoid legal, ethical, and privacy conflicts.
  3. Prevent Litigation: Build a community-level shield that stops bullying before it escalates to a headline or a lawsuit.

It’s time to modernize school safety. By giving parents the visibility they need without compromising student privacy, we protect the children, the parents, and the institutions that serve them.

Digital Parenting, safe internet use, safe social networking, social media

Knowing Your Child in the Physical World Is Only Knowing Half of Your Child


I recently watched a powerful interview on the Shawn Ryan Show with Elizabeth Phillips, and it stayed with me.

Elizabeth is the founder of No More Victims and Executive Director of the Phillips Foundation. Her mission is personal: her younger brother, Trey, died by suicide in 2019 after childhood sexual abuse at Kanakuk Kamps. The abuse was hidden for years because of a strict NDA.

When someone with her experience speaks, every parent should listen.

How Predators Use “Innocent” Apps Like Venmo

Elizabeth explained something every parent needs to understand:

Predators no longer hide in dark corners of the internet. They use the same apps our kids use every day.

Venmo is one example.

Teens use it to split bills or get paid for babysitting. But Venmo mixes a digital wallet with a social feed, and that gives predators an easy way in.

Here’s how the grooming often starts:

  • 👉 A small payment from a stranger
  • 👉 A friendly compliment about the teen’s profile photo: “You look like you could be a model.”

If the teen replies, the predator pushes further.

  • 👉 A bigger payment
  • 👉 A request for a “more personal” photo

Not sexual at first. Just “exclusive.”

But the moment that photo is sent, the tone changes. Flattery becomes pressure. Requests become threats.

This is how sextortion starts: blackmail that traps kids in fear and shame.

Silence Makes Everything Worse

Elizabeth’s story about Trey teaches an important lesson:

Abuse grows in the dark.

Whether it’s hidden by an NDA or hidden inside an app, the result is the same: The child feels alone.

And this is the reality of modern parenting.

We would never let our child walk alone into a dangerous neighborhood. But when we give them a smartphone, we unknowingly open a direct channel to anyone, good or bad.

Many apps are too complex for parents to set up safely. And kids end up navigating the digital world on their own.

The Couch Illusion

A child can be sitting right next to us on the couch… and still be facing a serious threat online.

We often have no idea:

  • Who they talk to
  • What content they see
  • Whether someone is trying to groom them

Being in the same room is no longer enough.

Parents Don’t Need to Spy – They Need Visibility

Parents today face challenges our parents never had.

We don’t want to invade our child’s privacy. We just want to know when something is wrong, so we can step in.

This is why, at PureSight, we built Surfie.

Not as a firewall. But as a tool that gives parents the visibility they need:

  • ✅ Detects grooming, sextortion, and bullying in real time
  • ✅ Alerts parents when kids are exposed to harmful content
  • ✅ Shows which apps they use and if they are age-appropriate

Surfie doesn’t replace parenting. It supports it.

Breaking the Silence Together

Elizabeth Phillips is working hard to change laws and stop the silence that destroyed her brother. As parents, builders, and leaders, we need to make sure that same silence doesn’t happen in our homes.

Our children live in two worlds today:

  • 🌍 The physical world
  • 💻 The digital world

To protect them, we must be present in both.

Let’s make sure no child faces these dangers alone.

Cyberbullying, Digital Parenting, prevention, safe internet use, social media

Online Grooming Doesn’t Start With Danger, It Starts With Conversation


Recently, Lori Fullbright, News On 6 Anchor & Crime Reporter, shared a clear and important breakdown of the actual scripts predators use during online grooming: how they initiate contact with children on social platforms, how trust is slowly built, how manipulation escalates, and where it can ultimately lead.

What stood out most is something many parents, and even professionals, underestimate:

Grooming is a process, not an event. It doesn’t happen in a single message. It can take weeks or months for the situation to become visibly dangerous.

That delay is exactly what makes it so effective, and so hard to detect.

Here’s the uncomfortable reality of modern parenting:

The moment we give a child a smartphone and access to social platforms, we’ve effectively created a virtual front door with a public sign.

Strangers don’t need to pass through our home. They don’t need to meet us. They can start a direct, private conversation with our child while the child is sitting right next to us on the couch.

This invisible access gap is one of the defining challenges of parenting in the digital age.

And it’s precisely the problem that led us to build Surfie.

Surfie was designed around a simple but powerful idea: Parents can’t protect what they can’t see.

By monitoring conversational patterns across social platforms and identifying early signs of grooming and manipulation, Surfie helps parents know when awareness is enough and when intervention is required.

Yes, delaying young children’s entry into social platforms is ideal. But it’s no longer sufficient.

Social platforms are embedded in daily life. Which means parenting itself must evolve, from supervision based on proximity to protection based on insight.

Thought leadership today isn’t about asking whether kids should be online. It’s about asking whether we’re giving parents the tools they need once they are.

🔔 Call to Action – Responsibility Looks Different for Each of Us

This conversation matters, but the responsibility looks different for each of us:

👨👩👧 Parents

  • 👉 Start conversations early, before the first red flag appears
  • 👉 Stay involved beyond screen-time rules
  • 👉 Don’t assume physical proximity equals digital safety

🏫 Educators

  • 👉 Align regulation with how children actually communicate today
  • 👉 Shift focus from reaction to prevention

🛡️ Insurers

  • 👉 Treat online harm as a preventable risk, not just a claim
  • 👉 Invest in early-detection and family-protection models

🧠 Platform Leaders

  • 👉 Design child safety as a default, not an add-on
  • 👉 Detect grooming patterns early, before escalation

Let’s stop reacting late and start preventing early.

Because online safety doesn’t fail suddenly.

It fails slowly, when no one is watching.

Digital Parenting, grooming, Online predators, safe internet use, Sexting, social media

The Illusion of Safety: Why Parental Intuition is No Longer Enough in the Digital Age


I recently came across a disturbing news report that every parent needs to read. In Israel, a mother’s intuition saved her 10-year-old son from a predatory situation involving his own teacher. The mother noticed messages arriving at odd hours, spotted behavioral changes in her child, and decided to investigate. She discovered a stream of inappropriate, intimate messages sent by an authority figure the child trusted.

This mother saved her child, and she deserves credit. But her story is also sobering: it shows just how hard it has become to keep kids safe today.

The “Living Room” Paradox

We live in a time where the concept of “safety” has fundamentally shifted. In the past, if our children were sitting on the sofa next to us, we knew they were safe. Today, that physical proximity is an illusion.

A child can be sitting two feet away from their parents, safely inside their home, yet be virtually transported to a dangerous environment. They could be dealing with cyberbullying, exposed to toxic trends, or, as in the recent news story, being groomed by a predator.

The scary reality is that for every parent who manages to catch that one suspicious message in time, there are countless others who might miss it. And it isn’t their fault.

The Data Overload Challenge

Let’s be realistic about the digital landscape our children inhabit. The average child receives and sends hundreds of messages a day across WhatsApp, Instagram, TikTok, and gaming chats.

Expecting a parent to manually read through every single line of text to find a potential threat is not only invasive to the child’s privacy, but it is also practically impossible. It is like trying to find a specific grain of sand on a beach.

If we rely solely on manual checks or “lucky” intuition, we are leaving our children’s safety to chance.

Bridging the Gap with AI

This specific challenge is the driving force behind PureSight and our solution, Surfie. We realized early on that parents don’t need to see everything; they need to see what matters.

We developed Surfie to act as a smart digital filter. Using advanced Artificial Intelligence, the system monitors social platforms and messaging apps, but it doesn’t just record text; it analyzes context.

  1. Event Detection: The AI is trained to recognize the distinct patterns of cyberbullying, predatory grooming, and emotional distress.
  2. Contextual Awareness: It distinguishes between friendly banter and dangerous interactions.
  3. Real-Time Alerts: Instead of handing the parent a transcript of 500 messages, Surfie stays silent until a red line is crossed.
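The third point, staying silent until a red line is crossed, reduces to a simple flow. The `Message` type, the threshold value, and the alert callback here are illustrative assumptions, not Surfie’s actual interface:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    sender: str
    text: str
    risk: float  # 0.0-1.0, as produced by some upstream classifier

RED_LINE = 0.8  # illustrative threshold for "demands adult intervention"

def monitor(stream: List[Message], alert: Callable[[Message], None]) -> int:
    """Stay silent on ordinary chatter; surface only red-line events.

    Returns the number of alerts raised: the parent sees the one flagged
    exchange instead of a transcript of hundreds of messages."""
    alerts = 0
    for msg in stream:
        if msg.risk >= RED_LINE:
            alert(msg)  # e.g. push a notification with the exchange
            alerts += 1
    return alerts

# A day of 500 messages in which exactly one crosses the red line:
flagged: List[Message] = []
day = [Message("friend", "gg, rematch?", 0.1)] * 499
day.append(Message("stranger", "send a photo or everyone sees this", 0.95))
monitor(day, flagged.append)  # only the one red-line message is surfaced
```

The design choice this illustrates is the one the post argues for: the filter’s job is not to record everything, but to decide what is worth a parent’s attention.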

How Technology Could Have Changed the Narrative

Returning to the story of the 10-year-old boy: Had a solution like Surfie been active on his device, the outcome wouldn’t have depended on the mother noticing a late-night notification.

The system would have analyzed the content of the conversation. It would have flagged the inappropriate language and the suspicious nature of the dialogue coming from an adult figure. The mother would have received an immediate alert on her phone, showing her the specific problematic exchange, allowing her to intervene instantly, perhaps even weeks before she eventually did.

Moving From Reaction to Prevention

As parents, we provide our children with smartphones to keep them connected and safe. Yet, without the right tools, those same devices open a door we cannot easily close.

We need to normalize the use of digital parenting tools not as “spying,” but as essential safety gear, like a seatbelt or a bicycle helmet. Solutions like Surfie are designed to respect the child’s privacy while ensuring that when a threat arises, the parent is the first to know.

We cannot control the internet, but we can control how we equip ourselves to handle it. Let’s take the burden of “luck” out of the equation.

Cyberbullying, Digital Parenting, online child safety, prevention, regulation, safe internet use

The iPhone vs. Android Debate: It’s Not Just About Preference, It’s About Our Children’s Safety


At the Cohen household, there is a long-running debate about the “supreme” mobile operating system. My wife is a devout iPhone user, while I am firmly in the Android camp. Neither of us has successfully converted the other.

These differences are reinforced daily by the distinct User Interfaces of each platform. I find it just as difficult to operate her device as she does mine. It’s a friendly disagreement, and truthfully, we manage to live in peace despite this technological divide.

However, when we shift the context from adults to children, the differences between these two platforms stop being a matter of taste and start being a matter of safety.

As the CEO of PureSight, where we specialize in protecting children online, I see a fundamental contrast in philosophy between the two giants:

  • Apple prioritizes a “walled garden” approach to privacy. While noble in theory, this policy severely limits third-party applications’ ability to provide robust child protection services.
  • Google, on the other hand, adopts a policy that balances privacy with parental choice. They have established strict guidelines: apps can request monitoring permissions, but they must transparently explain what is being accessed and why. If a parent chooses to grant those permissions to protect their child, the OS allows it.

The Result: On an Android device, we can run child protection services that are significantly more effective, practical, and deep than what is possible on an iPhone. The proof is in the market – virtually all dedicated “safe phones” for kids available today are built on the Android platform, not iOS.

Social Pressure vs. Online Child Safety

This creates a massive challenge for parents, particularly in markets like the US where the iPhone is a status symbol. Parents face immense pressure to provide their children with iPhones to avoid social exclusion (the “Green Bubble” stigma). Yet, by doing so, they inadvertently back themselves into a corner with very limited tools to monitor and protect their children in the digital social sphere.

A Regulatory Blind Spot

Current regulatory discussions on child safety are heavily focused on blocking access to social platforms. While well-intentioned, I believe this misses a crucial opportunity.

Instead of just trying to ban usage, regulators should demand that OS providers (specifically Apple) open up their APIs to legitimate child safety vendors. We need the ability to monitor and protect children on the device level – capabilities that the OS providers themselves are not fully offering.

The Privacy Paradox: The Life360 Example

Critics often cite strict privacy as the reason for locking down devices. But do parents actually prefer total privacy over safety?

Look at Life360, a location service with over 90 million users, mostly families. As a public company, they have disclosed that the data collected from their free-tier users (about 97% of their base) is sold to third parties to fund the operation. Despite the known trade-off between privacy and utility, millions of families use it daily.

The lesson? Parents are willing to share data if it means keeping their children safe.

Driving Change: The “First Phone” Opportunity

Let’s be realistic: attempting to switch a teenager from an iPhone to an Android is virtually a “mission impossible” due to social dynamics. However, parents hold the power when purchasing the very first smartphone, typically around age 10.

This is the precise moment when children take their first steps into the digital world, a critical stage where deep parental involvement and guidance are essential, not optional. Therefore, I strongly recommend utilizing this window of opportunity to ensure their first device is Android-based. It is the most effective way to guarantee you have the necessary tools to guide and protect them during these formative years.

Let’s prioritize safety over status.

Digital Parenting, online child safety, safe internet use, social media

Jonathan Haidt, Pac-Man, and What Parents Are Missing


Like many parents, I recently received a message from my wife with a link to a podcast by Jonathan Haidt. She sent it with a note of deep concern about how the digital world is affecting our young daughters.

It was ironic. Why? Because I am the CEO of PureSight, a company that builds tools to help parents navigate exactly these challenges.

Haidt himself notes that most of his book’s buyers are mothers. They are often the first to spot these behavioral changes in children and bring this critical discussion to the family table.

The “Kids These Days” Trap

I agree with Haidt on one fundamental point: our kids are behaving differently because of screens and social media. These are challenges that previous generations never faced.

However, I believe his analysis is missing something.

Older generations always complain about “the youth of today.” I am Gen X, and there is a famous joke about my generation:

“If Pac-Man had affected us as kids, we’d all be running around in dark rooms, munching pills and listening to repetitive electronic music.”

Technically? Maybe it was true. We played video games, and later we went to dark clubs. But in the end, we turned out okay.

I believe our kids will be okay too. The digital world gives them amazing advantages we never had. Yes, there are new challenges, but we need to adapt, not panic.

Don’t Blame the Government, Empower the Parents

The biggest piece missing from Haidt’s view is the role of the family.

He focuses heavily on tech giants and asks the government for more regulations. He implies that parents are helpless against these companies.

But history shows that bans don’t really work. When Facebook required users to be 13+, it didn’t stop children. It just taught them to lie about their age to open an account.

Guidance over Bans

I believe it is safer for us to know where our kids are online. If we simply ban platforms, children will move to “underground” apps where we cannot help them.

Our job as parents is to educate, guide, and protect, just like we teach them to cross a busy street.

Digital life is here to stay. Our kids are not ready to face it alone; they need our compass. I believe in a balanced approach:

  1. Allow them to enter the digital world.
  2. Equip parents with tools to monitor activity and get alerts if the kids encounter dangerous content.

Looking to the Future

Serious incidents do happen online, and we must remain alert. But we should not try to turn back time.

I am confident that in a few years, we will look at this generation’s achievements with pride. And inevitably, they will grow up to stress about the changes facing their own children. 😊

online child safety, parenting, safe internet use

Why “Parental Control” Is No Longer Enough – And Why We Must Shift to Online Child Safety


In marketing, we know that when a product enters a new market first, its name often becomes the name of the entire category. Think of Zoom: it became the generic term for video conferencing, even when people were actually using Teams, Google Meet, or another platform.

In the world of child protection online, a similar thing happened. For many years, the category has been known as “Parental Control.” It’s a term born in the early days of the industry, when solutions focused mainly on web content filtering.

But after more than a decade in this field, and as a father of four (not-so-little) kids, I’ve never truly connected to the idea that a parent’s role is to control their children.

Our children are not robots. And I don’t believe that controlling them is the goal. Our role as parents is to educate, guide, and protect, while helping them gradually grow into independent, responsible digital citizens.

We Chose a New Term: “Online Child Safety Service”

At PureSight, we have chosen to move away from the old terminology. We refer to our solution as an Online Child Safety Service, because it reflects what modern families actually need today:

  • Not control.
  • Not restriction for the sake of restriction.
  • But involvement, awareness, and timely guidance.

The digital world has changed dramatically. If once the main risk was inappropriate websites, today the challenge is very different:

  • 📱 Kids spend far less time “browsing the internet.”
  • 📲 And significantly more time inside dedicated social, gaming, and messaging platforms.

This shift created an entirely new reality for parents.

The New Parenting Challenge

Every parent knows this moment:

You’re sitting in the living room with your child. They are next to you, holding a smartphone. Yet you have no idea:

  • Who they are talking to
  • What content they’re seeing
  • What conversations they are involved in
  • Or what is happening inside those apps

This lack of parental visibility is not a small issue. It removes a parent’s ability to guide, support, and protect. And that is a fundamental problem.

Regulation Is Coming – But Often Focused on Yesterday’s Problems

Across the world, more governments are realizing their responsibility to protect children online. This is encouraging, but many of these regulatory efforts still focus on yesterday’s challenges:

  • Traditional content filtering
  • Age-based blocking of entire platforms
  • Attempts to isolate children from digital life altogether

But as I’ve said before: I don’t believe full isolation is the answer.

Social platforms are the “digital roads” of our time. Just like real roads, we can’t keep children away from them forever.

  • We don’t ban kids from crossing the street.
  • We teach them how to cross safely.
  • We hold their hand when they’re young.
  • And gradually, as they mature, they learn to navigate it on their own.

The digital world demands the same approach.

Modern Child Safety Must Focus on Social Platforms

To truly protect children today, safety solutions must be able to:

  • ✔️ Monitor online interactions on social platforms
  • ✔️ Detect risks early
  • ✔️ Alert parents when intervention is needed

Because the real threats today are:

  • Cyberbullying
  • Predators initiating contact with children
  • Harmful content and dangerous trends
  • Emotional pressure or manipulation
  • Exposure to age-inappropriate material

Parents don’t need to “control” their kids.

  • They need awareness.
  • They need timely information.
  • They need the ability to remain involved, without intruding, and without breaking trust.

Our Mission at PureSight

At PureSight, this has been our mission from day one:

To empower parents with the right insights at the right time, so they can protect, guide, and support their children in the digital world.

Not through control. But through smart, AI-driven, respectful, and age-appropriate guidance.

As the digital world continues to evolve, so must the tools and language we use to keep our children safe.

And it starts by letting go of old terminology, and embracing the real challenge of our time: Online Child Safety.

If you’d like to explore how we support millions of families worldwide with AI-powered child protection, I’d be happy to connect.

 

Royi Cohen

CEO @ PureSight | Global expert on Online Child Safety, developing platforms and services for the global market.
Cyberbullying, Digital Parenting, online child safety, Online predators, safe internet use

When One Video Moves a Country: 10 Days from Viral Video to Law – What’s Next?


In the last week and a half, Brazil moved fast: a huge public debate about “adultização” (pushing kids to act like adults online), arrests, and a bill pushed forward quickly. This shows the topic is urgent, but we also need to act wisely, not only fast.

What happened – 10-day timeline:

  • August 6 – YouTuber Felca posts a long video on “adultização.” It goes viral and starts a national conversation.
  • August 11 – Reports say there are active investigations and that some accounts were removed. The story leads the news.
  • August 15 – Influencer Hytalo Santos and his spouse are arrested as part of cases tied to harm and exploitation of minors.
  • August 19 – The Chamber of Deputies approves fast-track status for a child online-safety bill.
  • August 20–21 – The main draft passes in the Chamber, and the bill goes back to the Senate for more debate.

What we do need: smart rules + one global device standard

We should build an open, global standard for child safety on devices, made by regulators, operating-system makers (iOS/Android/Windows/macOS/ChromeOS), device makers, platforms, and child-safety companies. Key ideas:

  • One clear approach across apps and platforms: not a patchwork where every app has different settings and parents must search in each one to protect their kids.
  • Built-in OS tools for child-risk and wellbeing signals (chat, media, location, screen time), with parent choice to turn them on or off.
  • Parent-approved oversight by trusted child-safety services: the standard should let certified services, with the parent’s permission, watch for risk signals across apps/devices and raise quick alerts when a child needs help or attention.
  • Privacy by design: do as much as possible on the device, keep only the data you need, use strong encryption, and store data safely (on device and/or secure cloud).
  • Clear and checkable: exportable logs, strong security rules, and independent labs to test and certify.
  • Works well with others: a shared way to handle key features (filtering, risk signals, parental controls) so parents can switch providers without losing basic functions.
  • Right duties for platforms (age checks, exposure limits, reporting paths), aligned with the device layer so nothing falls through the cracks.
  • Simple success metrics: fewer cases of harm, faster response times, and better alerts (fewer false alarms and fewer missed cases), plus real gains in child wellbeing.
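To make “a shared way to handle key features” and “exportable logs” concrete, here is one possible shape for a vendor-neutral risk-event record. No such published schema exists; every field name below is an invented illustration of what the standard could define:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RiskEvent:
    """Hypothetical interchange record for a cross-vendor safety standard.

    Field names are illustrative, not taken from any published spec."""
    event_id: str
    category: str            # e.g. "grooming", "cyberbullying"
    severity: float          # 0.0-1.0
    source_app: str
    detected_at: str         # ISO 8601 timestamp
    handled_on_device: bool  # privacy by design: was analysis local?

    def to_json(self) -> str:
        # sort_keys gives a stable, auditable export format
        return json.dumps(asdict(self), sort_keys=True)

    @classmethod
    def from_json(cls, payload: str) -> "RiskEvent":
        return cls(**json.loads(payload))

event = RiskEvent(
    event_id="evt-001",
    category="grooming",
    severity=0.92,
    source_app="chat-app-x",
    detected_at="2025-08-21T14:03:00Z",
    handled_on_device=True,
)
# The record round-trips losslessly, so a log written by one certified
# provider could be read by another when a family switches services.
```

A shared record like this is what would let parents “switch providers without losing basic functions,” and what independent labs could validate against during certification.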

Bottom line

Brazil’s wake-up call is important, and it’s a chance to do better. Let’s turn this energy into a single, comprehensive standard on devices plus balanced rules for platforms. We can truly protect kids, without putting impossible pressure on parents and without killing innovation. One clear standard, plus parent-approved oversight by trusted safety services, can make it practical to spot risks and alert parents when they need to step in.

I’d love to hear your thoughts: what principles would you add to such a standard?

 

regulation, Sexting, social media

Child Safety Online: Why Bans Aren’t Enough


Last week, Florida Governor Ron DeSantis signed a new law. It bans social media accounts for children under 14 and requires parental permission for teenagers aged 14 and 15.

On one hand, it is good to see governments worldwide realizing they must act to protect children. Old laws that simply filter content are not enough anymore because children today spend more time using apps than browsing websites.

However, we must ask: Will this law actually work?

The Limit of Bans

U.S. rules already keep children under 13 off social media (platforms set a 13+ age minimum to comply with COPPA), but they are not well enforced. Kids still find ways to create accounts. Therefore, simply raising the age limit or adding more restrictions might not be the real solution.

I watched an interview with Governor DeSantis, and he clearly cares deeply about this issue. However, he mentioned that his own children are young and do not have phones yet. This might mean he hasn’t yet experienced the real-world challenge parents face once their child owns a personal device.

The Reality: Social Media is Here to Stay

We cannot ignore social media. Even if we delay giving our kids smartphones, they will eventually join these platforms. Social media offers many benefits, but it also comes with risks.

We need to treat this like road safety. We don’t forbid children from ever going outside; instead, we teach them how to cross the street safely. We need to offer the same guidance for the digital world.

A Better Solution: Empowering Parents

In my opinion, regulation shouldn’t just be about banning access. It should focus on encouraging tools that help parents guide and protect their children.

Here are three areas we should focus on:

  • Parent Education: We must teach parents about the specific dangers of social networks, such as cyberbullying, grooming (predators), and dangerous trends. Parents also need to know that tools exist to help them stay involved and keep their kids safe.

  • Motivating Tech Companies: We should encourage communication companies and social networks to build better child protection tools. This should be done through positive rewards, not just punishment. If we only threaten companies with fines, they might choose cheap, low-quality solutions. We want them to aim for effectiveness.

  • Balancing Privacy and Safety: This is a big challenge. Mobile systems like iOS (Apple) and Android are built to protect user privacy. This is usually good, but it often blocks child safety apps from seeing the data they need to identify danger. We need new standards that protect a child’s privacy while still allowing parents to be alerted when their child is at risk.

Moving Forward

At PureSight, we have spent many years solving child safety issues on digital platforms. We are eager to work with organizations around the world to create a safer digital environment for our children.

Don’t wait: schedule a PureSight demo today!