What we know so far about how the government’s social media ban for under-16s will work

The federal government has taken a big step towards realising its ambitious plan to get children and young teenagers off social media.

After first announcing its intention to introduce legislation to set a minimum age for social media back in September, but staying quiet on what that minimum age would be, Prime Minister Anthony Albanese on Thursday announced he had settled on 16: the same age the Coalition has previously said it would support.

With both major parties in broad agreement and two sitting weeks left in this term, it seems likely the government will be able to achieve its aim of passing the legislation before the end of the year.

But that doesn’t mean 15-year-olds will be kicked off TikTok next week, and there are still many outstanding questions about how it will all work. Here’s what we know about the changes so far.

Who will the ban cover?

Australian children and teenagers under the age of 16, even if they already have a social media account.

That was made clear when Mr Albanese told reporters on Thursday that the legislation would not include “grandfathering arrangements” — in other words, social media users who are still under 16 when the ban comes in will be kicked off, assuming it passes parliament.

The ban will also theoretically cover younger teenagers who have parental consent, although the government has said individuals won’t be punished if they don’t comply.

So, when does it start?

Not for at least a year.

First, the bill has to get through the House of Representatives and the Senate, and the soonest that will happen is towards the end of this month, during the next sitting weeks.

After that, social media platforms will have 12 months to get organised before the law comes into force. So in reality, the ban will only impact users who are currently under the age of 15.

The reason platforms are getting a year’s grace is that the technical solutions might be hard to pull off.

What platforms will it apply to?

The ban might apply to more platforms than you think because the law’s definition of social media is very broad.

Obvious ones like Instagram, TikTok, Facebook, Snapchat and X would all be captured, and Communications Minister Michelle Rowland said on Thursday that “YouTube would likely fall within that definition as well”.

The definition may also capture gaming platforms such as Roblox and chat platforms such as Reddit or Discord, plus many smaller players.

We know that there will be an exemption framework for “low-risk” platforms as determined by the eSafety commissioner.

Ms Rowland said the idea was to create a “positive incentive” for platforms to lift their game.

The government’s language so far suggests that achieving an exemption will at least be within reach for some of the best-known platforms, but the bar is likely to be high and the details remain to be seen.

The exemptions framework might also help exclude any health or education-focused apps that meet the technical definition of a social media platform but carry few of the same practical risks.

The Coalition has expressed scepticism about the prospect of any of the major players being exempted.

How will it be policed?

Under the proposed changes, it will be the responsibility of social media companies to take reasonable steps to block people under 16.

Parents and children who break the rules, on the other hand, won’t be punished.

As for keeping an eye on whether the tech giants are taking those “reasonable steps”, that responsibility falls to the office of the eSafety commissioner.

The commissioner will issue “regulatory guidance” that sets out what reasonable steps should be taken by platforms, and that might include several different technical options, which we’ll cover in a moment.

If platforms fail to comply, they’ll be fined. We don’t know how much yet, but the government is signalling it will be boosting the current upper limit.

“Currently, under the legislation, the maximum fines are less than a million dollars. That’s far less than would apply under consumer law, for example,” Ms Rowland said.

How will your age be verified?

The government’s legislation won’t specify the technical method for proving a person’s age.

Several options are on the table, including providing ID and biometrics such as face scanning.

The government’s currently running an age assurance trial to assess all the methods, and it’s scheduled to continue into 2025.

Based on the results of that trial, eSafety commissioner Julie Inman Grant will make recommendations to platforms.

The eSafety commissioner will be tasked with oversight and enforcement of the new rules, under the proposed changes.  (ABC News: Adam Kennedy)

It’s possible that Australians will be asked to provide their IDs or biometric data directly to social media companies in order to use their platforms, but that’s not guaranteed.

Many of the big players, including Meta, have instead argued for the age verification onus to be placed on app stores, rather than individual platforms, as that would mean proving your age once — rather than every time you sign up to a platform.

It’s also possible that a third-party company that specialises in ID verification will act as an intermediary between users and social media platforms.

No matter which model is adopted, the prime minister has said privacy protections will be introduced to cover any data people end up providing.

Why does the government want a ban?

The government argues unfettered access to social media is having a negative impact on the mental health of children and young teenagers.

Mr Albanese put it this way: “I don’t know about you, but I get things popping up on my system that I don’t want to see, let alone a vulnerable 14-year-old.”

“The fact is that young women see images of particular body shapes that have a real impact in the real world. And young men through some of the misogynistic material that they get sent to them, not because they asked for it.

“If you’re a 14-year-old kid getting this stuff at a time where you’re going through life’s changes and maturing, it can be a really difficult time.”

The government shared some details about its proposed legislation on Thursday. (ABC News: Ian Cutmore)

So, why now? Age restrictions have been in the news for most of 2024, initially as a way of preventing children from accessing pornography, and the Coalition announced its support for a social media age limit in June.

Outside of Parliament House, a parent advocacy group called 36 Months gathered more than 125,000 signatures calling for policy change.

And of course, there’s also an election on the horizon and only two weeks left in the parliamentary year.

What does the industry say?

Social media companies aren’t thrilled with the current approach.

DIGI, an industry group that lobbies on behalf of social media platforms, has described the proposed ban as a “20th-century response to 21st-century challenges”.

“Rather than blocking access through bans, we need to take a balanced approach to create age-appropriate spaces, build digital literacy and protect young people from online harm,” managing director Sunita Bose said.

Children and young teenagers could be banned from social media platforms even if they’ve already got an account. (ABC News: Stephen Opie)

Meta, the tech behemoth behind Facebook and Instagram, is also pushing back against some of the changes, arguing that parents would prefer to have control of their children’s social media access — rather than be subject to a blanket ban.

“We’re obviously going to comply with whatever government decision is made,” said Meta’s head of global safety, Antigone Davis.

“But I really would love for us to see a system that … really listens to what parents have said.”
