Meta’s changes to social media policing will lead to clash with EU and UK, say experts

Sweeping changes to the policing of Meta’s social media platforms have set the tech company on a collision course with legislators in the UK and the European Union, experts and political figures have said.

Lawmakers in Brussels and London criticised Mark Zuckerberg’s decision to scrap factcheckers in the US for Facebook, Instagram and Threads, with one labelling it “quite frightening”.

The changes to Meta’s global policies on hateful content now include allowing users to call transgender people “it”, with the guidelines stating: “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation.”

Chi Onwurah, the Labour MP and chair of the science and technology committee for the House of Commons, which is investigating how online disinformation fuelled last summer’s riots, said Zuckerberg’s decision to replace professional factcheckers with users policing the accuracy of posts was “concerning” and “quite frightening”.

“To hear that Meta is removing all its factcheckers [in the US] is concerning … people have a right to be protected from the harmful effects of misinformation,” she said. “The fact that Zuckerberg said he’s following the example of X must raise concerns when we compare how X is a platform for misinformation to a greater extent than Facebook has been.”

Meta said it would rely on social media users to check each other’s posts in a system of “community notes” similar to the one adopted by Elon Musk on X. The move has prompted concerns that misinformation emanating from the US about issues including elections, health, pandemics and armed conflicts could spill into digital feeds all around the world, where Meta has more than three billion users.

On Wednesday the Nobel peace prize-winning American-Filipino journalist Maria Ressa predicted “extremely dangerous times” ahead for journalism, democracy and social media users. She faced criminal charges after running stories critical of the former Philippine president Rodrigo Duterte. She said Meta was going to “allow lies, anger, fear and hate to infect every single person on the platform”.

Meta’s move, which Zuckerberg made clear was a response to the incoming Donald Trump presidency, also prompted predictions that the Trump administration will mount a major challenge to laws such as the UK’s Online Safety Act.

The former UK technology minister, Damian Collins, said such a challenge would “most likely be made through trade negotiations where pressure will be brought against the UK to accept American standards for digital regulation”.

He said: “We must stand firm against such proposals, which would remove any chance we have to hold tech executives to account and require them to enforce the safety standards on their platforms that are set in our laws.”

A Meta whistleblower told the Guardian: “I am extremely concerned about what this means for teenagers.”

Arturo Béjar, a former senior engineer whose responsibilities at Meta included child safety measures, said: “They will be increasingly exposed to all the content categories that they need to be protected against.”

Harmful content, including violent or pornographic material, could reach young users more easily, Béjar said, citing Zuckerberg’s statement that tackling “lower severity” transgressions will now rely on users flagging content before Meta acts on it.

In Brussels, the European Commission hit back against Zuckerberg’s statement on Tuesday in which he cited Europe as a place with “an ever-increasing number of laws institutionalising censorship” – a reference to the EU’s Digital Services Act, which regulates online content.

A spokesperson for the EU’s executive arm said “we absolutely refute any claims of censorship” and that “absolutely nothing in the Digital Services Act forces or asks or requests a platform to remove lawful content”.

Zuckerberg has said his policy of ditching factcheckers applies only in the US for now, but his broadside against Europe has raised concerns that he plans to roll out the approach there too.

Meta will face intense regulatory scrutiny if it does so in the UK and EU, said Arnav Joshi, a senior technology lawyer at the law firm Clifford Chance.

“If there is a move away from human factcheckers and towards more automation, regulators will want to see evidence of the efficacy of these changes – this has proved difficult to quantify and justify in the past.”

Valérie Hayer, an MEP and the leader of the centrist Renew Europe grouping in the European parliament, said: “The EU will remain uncomfortable for social media giants by standing up for the integrity and independence of free expression and democratic processes. Europe will never accept manipulation and disinformation as a standard for society. By abandoning factchecking in the US, Meta is making a profound strategic and ethical mistake.”

Oliver Marsh, a former Downing Street adviser and the head of technology research at the Berlin and Zurich non-profit AlgorithmWatch, said: “If these policy changes mean you can spread lies that end with attacks on groups then there is a case Meta would be going against the EU’s Digital Services Act. The question is: does Zuckerberg care? His decisions – and the increasing likelihood he would refuse to comply with any enforcement action to impress Trump – bring us closer to the moment when the EU may have to decide if it has the powers to ban Meta, or how else they could hold them accountable.”

While Meta said content about suicide, self-injury and eating disorders would still be considered “high-severity violations” and it “will continue to use our automated systems to scan for that high-severity content”, the NSPCC, the UK child protection charity, voiced concerns.

Rani Govender, its regulatory policy manager for child safety online, said: “Meta needs to set out how the risks of harm to children in the UK are not being increased by the removal of factcheckers in America and its changes in content policies.”
