Australia’s Social Media Age Ban Is Live. But Is It Actually Making Kids Safer?
Two months into Australia’s Social Media Minimum Age law, the numbers are confronting. Snapchat has confirmed it has locked or disabled more than 415,000 accounts believed to belong to under-16s. On paper, that sounds like progress. In practice, it raises a bigger, harder question parents and policymakers are only just starting to grapple with.
Are we genuinely protecting young people online, or are we pushing them into darker corners of the internet where there are fewer safeguards and less visibility?
This week, Snapchat published a blog post reflecting on the law’s early impact. While the platform is complying with the legislation, its message is clear. The intent may be right, but the execution is deeply flawed.
And for parents navigating this new digital reality, that matters.
The promise of the law versus the reality
The Social Media Minimum Age law was introduced to curb harm. Less exposure to inappropriate content. Fewer risky interactions. A reset on screen time and social pressure for kids still developing emotionally and cognitively.
That goal is hard to argue with.
But implementation has proven messy. Age estimation technology, which many platforms rely on, isn’t exact. The Australian government’s own 2025 trial found it can be inaccurate by two to three years. That means some children under 16 will still slip through, while some teenagers who are legally allowed to be on platforms are incorrectly locked out.
For families already dealing with school transitions, mental health challenges and social development, being abruptly cut off from peer communication isn’t a neutral event. It can be destabilising.
And unlike previous parenting dilemmas, this one isn’t happening around the dinner table. It’s happening silently, on devices we don’t always see.
Where do kids go when they’re locked out?

One of Snapchat’s biggest concerns is not what happens on its platform, but what happens after teens are removed from it.
Snapchat positions itself primarily as a messaging service. In Australia, more than 75 percent of time spent on the app goes to messaging close friends and family. When teens lose access to that, they don’t stop communicating. They migrate.
The problem is where.
Less regulated messaging apps. Offshore platforms. Smaller services with minimal moderation, weaker reporting tools and little accountability. These aren’t theoretical risks. History shows that when restrictions are unevenly applied, behaviour doesn’t disappear, it just relocates.
From a parenting perspective, this creates an uncomfortable paradox. A law designed to reduce harm may actually make it harder for parents to see what their children are doing online at all.
Why Snap is pushing app-store-level age verification
Snapchat’s proposed solution is app-store-level age verification. Not because it supports a blanket under-16 ban, but because if the ban exists, it should be applied consistently across the digital ecosystem.
The argument is simple. If age verification happens at the app store or device level, all apps receive the same age signal. That reduces guesswork, limits workarounds and creates fewer loopholes. It also avoids singling out a handful of platforms while leaving hundreds of others effectively unpoliced.
For parents, this matters more than it sounds. A fragmented system creates confusion. A universal one creates predictability.
It also reduces the risk of kids being pushed into platforms that have no meaningful safety infrastructure at all.
Is a blanket ban the right approach for teenagers?
This is where the debate gets uncomfortable.
Snapchat has been explicit that it does not believe a total under-16 ban is appropriate, particularly for platforms centred on private communication rather than public broadcasting. The company argues that cutting teens off from peer relationships doesn’t make them safer or healthier.
Many child development experts would agree that connection matters, especially during adolescence. The issue isn’t whether teens communicate online. It’s how, with whom and under what safeguards.
That doesn’t mean platforms should be left unchecked. It does mean the conversation needs to evolve beyond age alone.
What parents are actually asking

Parents aren’t naïve about social media. Most know there are risks. But they’re also asking practical questions the law doesn’t yet answer.
If my child is locked out of a mainstream platform with parental tools, where will they go instead?
How do I monitor apps that operate outside Australian regulation?
Why is responsibility placed on individual platforms rather than the system that distributes them?
And perhaps most importantly, how do we protect kids without pretending technology alone can do the parenting?
What Snapchat is doing in the meantime
While advocating for change, Snapchat says it is continuing to invest heavily in safety tools. These include two-way friend requirements for messaging, 24/7 Trust and Safety teams, including staff based in Sydney, and expanded parental controls through its Family Center.
Parents can now see how much time their teen spends on different features, who they’re connecting with and whether new contacts are mutual or already known offline. These aren’t perfect solutions, but they are tangible ones.
And that’s where nuance matters.
So where does this leave families?
Australia’s Social Media Minimum Age law was born out of genuine concern. But two months in, it’s clear that protecting children online isn’t as simple as switching off accounts.
Safety isn’t just about restriction. It’s about visibility, consistency and support. It’s about recognising that teenagers will seek connection, whether adults like the platforms or not.
If this law is going to stay, the conversation needs to shift from punishment to infrastructure. From individual apps to the broader ecosystem. From age bans alone to layered protection that actually reflects how kids live online today.
Because the real risk isn’t that teens are on social media.
It’s that we lose sight of where they go when they’re not.
What do you think? Is Australia’s under-16 social media ban protecting kids, or pushing them into riskier spaces? Join the conversation at Parenthood360 and share your take with our community of parents navigating the digital age together.