Opinion: Canada’s Proposed Social Media Ban for Under-16s Misses the Mark


Ottawa, ON, Canada (WNEWS OPINIONS) – Prime Minister Mark Carney says Canada should debate whether children under 16 should be barred from social media as part of revived online harms legislation. However, he has not committed to a final position. That matters, because once government starts treating access to social media as something to be licensed, gated or restricted by law, the debate quickly stops being only about kids. It becomes a much broader question about privacy, identity, speech and state power online. 

That is why Ottawa should tread very carefully.

There is no serious dispute that children face real risks online. Harmful content, predatory behaviour, compulsive design, harassment and algorithmic rabbit holes are real concerns. Even critics of a ban do not dispute that. The real dispute is over the remedy. A blanket or near-blanket age-based restriction sounds simple in a headline. In practice, it is anything but simple. And the evidence emerging from other countries suggests these laws create new problems even as they try to solve old ones.

Canada has already signalled that online harms legislation remains on the table after Bill C-63 died on the Order Paper, and federal briefing materials say Ottawa is still assessing how best to move forward. In recent days, Carney also said that if Canada does revisit online harms, the question of a social media “age of majority” would naturally be part of that discussion. 

That should concern every adult, not just every parent.

The real-world problem: age checks never stay narrow

The political sales pitch behind these proposals is usually straightforward: protect minors, target platforms, keep dangerous content away from children. But the enforcement reality is much messier.

To prevent under-16s from opening or keeping accounts, platforms need a way to verify age. That means some form of age assurance, age estimation, age verification, or identity checking. Official guidance in both Australia and the U.K. makes clear that platforms may use combinations of tools, including age estimation, secondary checks, and, in some circumstances, stronger verification methods. The U.K. government has openly pointed to facial scans, photo ID and credit card checks as examples of methods being used for compliance in the online-safety space. 

And that is where the civil-liberties problem begins.

Even if governments say they do not want every user to hand over ID, systems built to exclude minors inevitably pressure platforms toward broader screening. Australia’s privacy regulator and eSafety guidance both stress that platforms must take “reasonable steps” to prevent under-16s from holding accounts, while the government has also warned that asking all users for age proof could itself be unreasonable. That sounds reassuring until you realize the contradiction: platforms must reliably identify minors, but are told not to over-collect data from everyone else. In the real world, companies facing fines tend to over-comply, not under-comply. 

That leaves ordinary users caught in the middle.

If a platform wrongly flags an adult as underage, that adult may be asked to prove who they are. In Australia, Meta told users who were mistakenly flagged that they could appeal using government ID or facial age estimation, and AP reported concerns about error rates in that process.

That is the practical issue Canada should focus on before it copies slogans from abroad: a law aimed at minors can still force adults into verification pipelines.

Data breaches make this far worse

This problem would be serious at any time. It is more serious in an era of constant data leaks, account theft and identity fraud.

Once age verification becomes the norm, platforms and third-party vendors have strong incentives to collect more signals about users: date of birth, device patterns, facial scans, government ID images, payment-linked checks, or risk scores generated behind the scenes. Regulators in Australia and the U.K. both insist privacy law still applies and that data protection must be respected. But those assurances do not eliminate the underlying risk. They merely acknowledge that the risk exists. 

And for users, the question is simple: why should access to lawful online services increasingly depend on handing over more sensitive personal information than before?

Supporters of these laws often reply that platforms already know enormous amounts about us. That is true. It is also a weak argument for requiring them to know even more.

Australia’s rollout shows how fast “reasonable steps” becomes a compliance mess

Australia is now the clearest test case because its law took effect on December 10, 2025, requiring age-restricted platforms to take reasonable steps to prevent Australians under 16 from holding accounts. Official Australian guidance frames the policy as a “delay” rather than a ban on social media use, and says the legal burden falls on platforms, not children or parents. 

But even in that softer framing, the implementation problems are obvious.

Reuters reported that major platforms were ordered to block children or risk massive fines. AP reported that platforms including Facebook, Instagram, TikTok, X, Reddit, Snapchat, Threads, Twitch and YouTube were affected, with penalties of up to A$50 million for non-compliance. Google called the law "extremely difficult" to enforce and warned it might not improve safety the way supporters hope.

The early compliance numbers were striking. The Guardian reported that Meta blocked more than 544,000 accounts in the first days after the restrictions took effect. That sounds impressive until you ask the next question: how many were correctly identified, how many were wrongly swept up, and how many teens moved elsewhere? The same report said some young users were already boasting online about bypassing the controls, while others migrated to less-regulated platforms. 

That is one of the central weaknesses of access bans. They do not eliminate demand. They redirect it.

A determined teenager may not stop using online services. They may just move to fringe apps, lie about their age, borrow an adult’s credentials, or use tools that are less transparent and less moderated. When that happens, the law can push minors away from larger services that at least have trust-and-safety teams and toward corners of the internet where oversight is thinner. That is not a theoretical concern; Australia’s own consultation materials explicitly referred to possible circumvention and unintended consequences. 

Australia is also still dealing with scope problems. Which services count as “social media”? Which are educational, messaging, gaming or video-sharing services? Which platforms are exempt, and why? Those classification fights are not minor details. They go to the heart of fairness and enforceability. 

The U.K. offers a different warning: mission creep

The U.K. is not the same as Australia. It does not currently have a blanket under-16 social media ban in force. But Britain’s Online Safety Act already requires strong age checks for pornography and certain harmful content, and ministers are now openly considering going further, including an Australian-style social media ban for under-16s. 

That distinction matters, because it shows how these systems evolve.

What begins as a rule for one category of harmful content can become a broader architecture for restricting access across more services. Reuters reported this week that age-checking technology is rapidly becoming central to a wave of child-safety laws, while the U.K. is tightening rules and watching the Australian experience closely. Ofcom itself has said it must produce a statutory report by July 2026 on how age assurance is being used and how effective it has been. 

In other words, even British regulators are still in the evidence-gathering phase on major parts of the regime.

That should make Canadian lawmakers skeptical of claims that the model is settled and proven.

The U.K. also highlights another problem: once age checks become part of the system, users start looking for ways around them. Ofcom’s own materials discuss the need for age assurance to be robust against circumvention, which is regulator-speak for a very human reality: people will try to bypass the controls. 

That creates its own risk chain. The more governments pressure platforms to verify users, the more some users will turn to workarounds such as burner accounts, borrowed identities, or VPNs. The more those workarounds spread, the louder the argument for even more intrusive enforcement grows. That is how mission creep happens: first age gates, then stronger identity checks, then pressure on app stores, device makers and ISPs. Reuters has already reported that policymakers in multiple jurisdictions are swapping notes on these approaches. 

Why this matters for adults, journalists and dissenters

Age verification debates are often framed as if only children are affected. That is misleading.

Adults rely on the ability to access online services without repeatedly proving who they are. Journalists use pseudonymous spaces to source stories. Whistleblowers, dissidents, abuse survivors and vulnerable users often depend on anonymity or partial anonymity for safety. Even when a law does not formally require “real-name” internet use, systems that normalize ID checks chip away at that principle in practice.

The U.K. ICO’s own guidance effectively acknowledges this tension by urging proportionate, risk-based approaches and layered methods rather than defaulting automatically to the most intrusive checks. That guidance exists for a reason: because the more certainty a platform seeks, the more personal data it may need to process. 

Canada should not casually build a legal regime that makes lawful participation in digital public life increasingly contingent on age scoring, biometric analysis, or identity proof.

Parents and platforms should do more — but government should not become the gatekeeper

There is a better argument available to Ottawa than “do nothing,” and it does not require a ban.

Governments can strengthen digital literacy, support parental controls, enforce laws against exploitation and harassment, demand platform transparency, and require safer design for minors without turning every user into a verification subject. Canada can also target platform features that are genuinely dangerous to minors, such as predatory recommendation loops, addictive design choices, weak default privacy settings, and poor moderation pathways.

That is a much smarter approach than empowering the state to decide, in broad strokes, who gets to participate online.

Because once that power exists, it rarely stays confined to the original promise.

The lesson for Canada

Australia shows the enforcement chaos: huge account takedowns, disputes over platform scope, complaints about false flags, easy circumvention and continuing pressure toward more data collection. 

The U.K. shows the regulatory creep: age checks introduced for harmful content, ongoing questions about privacy and effectiveness, and an active political push to widen the model further. 

Canada should learn from both.

A free society should be very cautious before letting government set the terms of entry to lawful digital speech. Protecting children online is a legitimate goal. But creating a system that pressures millions of people to verify their age, disclose more personal data, or risk losing access to mainstream platforms is not a narrow child-safety measure. It is a structural change to the internet.

And once built, that structure will not be easy to dismantle.

The federal government should resist the temptation to copy the toughest-sounding overseas model simply because it polls well. Ottawa’s job is to protect children without compromising privacy, normalizing digital ID checks, or making government the referee of who gets to use social media in the first place.

That is not a conservative principle or a progressive principle. It is a democratic one.

WNews Opinion brings top commentary on the leading issues facing the world today.


(C) 2012 – 2024  | WNews Broadcasting Corp, a W-World Company | All Rights Reserved
