Australia has taken the easy way out in protecting children from social media harms, and Canada and other countries are poised to follow suit.
Australia has banned access for children under 16 to some of the most popular social media platforms including Facebook, TikTok, YouTube, Instagram, X, Snapchat and Reddit.
The ban took effect on Dec. 10. Australians under age 16 are no longer permitted to have accounts on those platforms.
At first glance, Australia’s world-first ban appears laudable and worth emulating.
It is nothing of the kind, however.
The ban fails to regulate social media companies. The content they provide — and, indeed, “push” at users with their algorithms — continues to go unsupervised and unregulated.
The ban does nothing to prepare teens under 16 for the social media dangers they will encounter once they are older.
Denied access to platforms that have at least some guardrails, many kids will be driven to explore the dark corners of the internet.
And kids still have private messaging and access to other platforms that have not been banned, such as Discord.
Children who find ways to continue using the banned sites — and many have told pollsters that they are doing so — will be unlikely to report online harm they have experienced.
Knowing that they broke the law in accessing a banned site where they were harmed, many under-16s will try to process the damage on their own.
And adults will continue to be vulnerable to the full range of internet harms.
Critics of the ban have correctly noted that online content and behaviour that is problematic for 16-year-olds is likely to be problematic for people older than that arbitrary age threshold.
One thing that advocates of bans can’t be accused of is lack of just cause.
A short list of social media harms includes scams, cyberbullying, “revenge porn,” algorithms that push hateful, racist and violent content at users, and predators who use social media to target their victims.
Part of the appeal of bans for politicians is that they cost government practically nothing to impose. This is gesture politics by which governments give the appearance of doing something useful about a problem they in fact are not seriously addressing.
If we are serious about eradicating social media harms, there are two substantial measures we can adopt.
We can educate kids on safe social media use, incorporating safe-use instruction into mandatory school curricula.
And we can finally regulate a social media industry that continues to be a lawless frontier where TikTok’s estimated six million Canadian monthly active users, mostly young people, are on their own in protecting themselves from online harms.
We need formal education to teach children and young adults to become sophisticated digital citizens, well-versed in using social media to research a class project or a business presentation, and able to identify and report digital harms.
But that kind of education — in safe use, media literacy, and privacy rights — costs money that governments aren’t willing to invest.
Neither are they willing to regulate social media. The platforms are not required to design and test their products for safe use by children and adults.
Social media products are not subject to regulatory approval before they hit the market, the way new medications, food products and airplanes are. We learn of their dangers only after the fact.
“The onus should be on the platforms to change how they are designed and run,” professors Kaitlynn Mendes and Christopher Dietzel of Western University and Carleton University, respectively, wrote recently in a Toronto Star op-ed.
For governments, “the right and brave move then is actually to regulate social media companies.”
Researchers at the Brookings Institution, a Washington, D.C., think tank, agree.
“If governments aim to build a safer digital environment, the more effective path is to hold technology companies accountable for the design of their platforms and the content moderation systems that allow harmful material and behaviours to persist,” write Brookings researchers Nicol Turner Lee, Josie Stewart and Carolina Oxenstierna in a recent report.
“Rather than banning access to online spaces that educate, connect, and empower young people, lawmakers should focus on regulating the underlying systems — data collection, algorithms, and content moderation — that power harmful dynamics online.”
Platforms should be required to report worrisome content in a user’s account or algorithmic feed regardless of age — recurring expressions of suicidal ideation, for example.
With luck and some public pressure, Canada’s forthcoming Online Harms Bill will provide genuine reforms, including a digital ombudsperson and a transparency requirement obliging platforms to disclose problematic behaviour on their services.
Those reforms would cost money to administer. A tax on social media firms would easily cover the expense of protecting all Canadians.
With combined 2024 profits of $290 billion for Meta Platforms Inc. (Facebook, Instagram), Alphabet Inc. (Google, YouTube) and ByteDance Ltd. (TikTok), social media platforms can afford the cost of finally becoming socially responsible.