Meta, the parent company of Facebook, Instagram, and WhatsApp, announced Tuesday that it would begin rolling out measures that restrict what kind of content young people can access, who they can talk to, and how much time they spend on social media. The rollout began on Instagram in the US on September 17, and the measures will eventually be implemented on Facebook and WhatsApp, too.
The new policies include automatically making the Instagram accounts of users 16 and under private, limiting who can contact teen accounts or tag them in posts, muting certain words associated with online bullying, and defaulting to the most restrictive content settings, as well as encouraging young people to spend less time on the app.
The new protocols come after years of discourse regarding the effect of social media use on young people, with pundits and politicians arguing that social media and smartphones are to blame for a decline in teenagers’ well-being.
Legislation and lawsuits have blamed social media for issues ranging from bullying and suicidal ideation to eating disorders, attention problems, and predatory behavior. Meta’s new policies gesture toward those concerns, and some may have positive effects, particularly those geared toward privacy. But they respond to the rhetoric of politicians more than to teenagers’ well-being, and they come even as some experts caution that no causal relationship between youth social media use and those poor outcomes has been established.
Meta is trying to address lots of criticism about its effect on teens
Meta and other social media companies have been subject to intense scrutiny for their perceived ill effects on the mental health and well-being of young people. Cyberbullying, eating disorders, anxiety, suicidal ideation, poor academic outcomes, sexual exploitation, and addiction to social media and technology are all concerns that Meta’s new Instagram protocols were designed to address.
In recent years, reporting — like the Wall Street Journal’s 2021 series The Facebook Files — has explored how Meta’s leadership knew that Instagram could be toxic for teen girls’ body image, yet did not try to mitigate the risks to vulnerable users. Surgeon General Vivek Murthy has also placed the blame for increasing rates of depression and anxiety on social media use; his office released a report last year warning that social media use was a leading contributor to a decline in young people’s mental well-being.
The report says that up to 95 percent of American children ages 13 to 17 use social media, and nearly 40 percent of children ages 8 to 12 do, too. “At this time, we do not yet have enough evidence to determine if social media is sufficiently safe for children and adolescents,” the report’s introduction states; it cites excessive use, harmful content, bullying, and exploitation as the main areas of concern.
Murthy also called for a surgeon general’s warning label on social media — similar to the one on cigarette packs and alcohol bottles warning about those products’ risk to health — in a New York Times op-ed in June. The op-ed also called for federal legislation to protect children using social media.
One such bill, the Kids Online Safety Act (KOSA), is already making its way through Congress. KOSA passed the Senate in July and is headed to the House for markup Wednesday; it’s not clear whether any version of the bill will end up passing both chambers, but President Joe Biden has indicated that he would sign such a bill if it did.
The version of KOSA that passed earlier this summer would require companies to let minors turn off targeted algorithmic features and to limit features that reward or enable sustained use of the platform or game in question. It would also require companies to limit who could communicate with minors, as Meta’s new policies do; “prevent other users […] from viewing the minor’s personal data”; and mitigate and prevent harms to teen mental health.
The Senate-approved version of KOSA goes further than Meta’s new teen account policies do, particularly when it comes to young people’s data privacy, and it’s unclear what effect Instagram’s Teen Accounts will have, if any, on legislation surrounding young people’s social media use.
Who are the new protocols for, and will they make teens’ lives better?
The language in Meta’s press release is geared toward parents’ concerns about their children’s social media use, rather than young people’s online privacy, mental health, or well-being.
The reality is that Meta’s teen accounts, as well as the KOSA legislation, can only do so much to address cultural and political fears about what social media does to children’s well-being, because we simply don’t know that much about it. The available data does not show that social media use has more than a negligible effect on teens’ mental health.
“A lot of things that are proposed to fix social media are not really questions of scientific rigor, they’re not really questions about health or anxiety or depression,” Andrew Przybylski, a professor of human behavior and technology at Oxford University, told Vox. “They’re basically matters of taste.”
Stetson University psychology professor Christopher Ferguson, who studies the psychological effect of media on young people, said that in his view the uproar over social media’s effect on kids’ well-being has all the makings of “a moral panic,” echoing earlier generations’ concerns that radio, television, the role-playing game Dungeons & Dragons, and other new media would ruin the minds and morals of children.
It’s unclear exactly what metrics Meta plans to use to decide whether the new rules are helping children and parents; when asked about those metrics, Meta spokesperson Liza Crenshaw told Vox only that the company would “iterate to ensure Teen Accounts work” for Instagram users. Crenshaw didn’t respond to follow-up questions by publication time.
“These all look like good-faith efforts,” Przybylski said. “But we don’t know if it’s going to work.”