It’s easy to find misinformation on social media. It’s even easier on X.

Many changes have come to X, the platform once known as Twitter, since billionaire Elon Musk completed his acquisition of the company almost two years ago.

One of the largest has been the rise in misinformation. A platform that used to downgrade hoaxes, conspiracy theories and false claims has become one where even the boss now spreads the stuff. That change didn’t happen immediately, but the shift of X from a useful information source to a locus of misinformation has alarmed fact-checkers worldwide.

“It used to take work to find disinformation on Twitter. You had to set up dashboards and Boolean searches,” said Maarten Schenk, co-founder and chief technology officer of Lead Stories. “Nowadays, you just check the trending topics. You can also see what Elon Musk retweets or amplifies.”

The relationship between large technology companies and professional fact-checkers has always been contentious, with fact-checkers accusing the platforms of not doing enough to combat the spread of misinformation. But at least there is a relationship. Meta — which owns Facebook, Instagram and WhatsApp — partners with independent fact-checkers to review and rate posts on its platforms, for example. TikTok operates a similar program.

X under Elon Musk has shown no interest in doing the same. It does not have a formal relationship with fact-checkers and instead relies on its crowdsourced fact-checking program “Community Notes,” Maldita.es co-founder and CEO Clara Jiménez Cruz said. While experts acknowledge that there are some advantages to the Community Notes system, it also has its flaws, allowing many pieces of mis- and disinformation to go unchecked and viral.

“Even though Twitter has never made huge efforts to tackle misinformation, (there’s) clearly a less prominent effort right now,” Jiménez Cruz said. “It doesn’t only involve misinformation itself, but also hate speech and other forms of manipulative content.”

X did not respond to a request for comment.

The result, fact-checkers say, is a worse experience for users as misleading and hateful posts clutter people’s feeds and disinformation campaigns run rampant. Many worry about the effects of those campaigns, especially during a year when nearly half the world’s population votes in national elections.

“These platforms are contributing, in more ways than one, to how stable societies are or would be in the future,” said FactSpace West Africa director Rabiu Alhassan. “We are talking about very vulnerable, young democracies on the (African) continent with emerging youthful populations that are using these platforms. … They (tech companies) have the responsibility to ensure that their platforms are not used to spread misinformation and disinformation that really has the potential to undermine and destabilize a number of countries.”

Elon Musk’s acquisition of Twitter in 2022 ushered in a number of changes: mass firings, an overhaul of the platform’s verification system, a new name. It also marked a shift in the company’s interactions with fact-checkers.

Prior to Musk, Twitter had relationships with a few news organizations, including The Associated Press and Reuters, said Alex Mahadevan, director of Poynter’s Mediawise. The company worked with fact-checkers as it built Birdwatch, the crowdsourced fact-checking apparatus that was later renamed Community Notes under Musk. When misinformation hit the trending topics page, Twitter would sometimes append fact checks, according to Schenk.

Now, misinformation often goes unchecked. In fact, Musk has emerged as a major spreader of misinformation, amplifying false claims to his 197 million followers. The Center for Countering Digital Hate found that Musk made 50 false or misleading posts about the U.S. elections between Jan. 1 and July 31, generating nearly 1.2 billion views, and The Washington Post reported last week that Musk’s online posts have coincided with harassment campaigns toward election administrators.

The format of X Premium, the platform’s subscription service, also incentivizes people to spread misinformation, fact-checkers said. Those who are subscribed can monetize their X page and get paid when people engage with their posts.

“You want more engagement,” Newschecker managing editor Ruby Dhingra said. “That incentivizes people to write things that might not necessarily be true or that might create controversy.”

Musk and X have shown little willingness to engage with fact-checkers. Musk, for example, has called fact-checkers “biased” and “liars.” X has also withdrawn from the EU Code of Practice on Disinformation, a voluntary initiative whose signatories — which include tech companies like Meta, TikTok and Twitch — commit to countering disinformation.

“Basically, the impression among fact-checkers in Europe is that disinformation is more legitimized now on (X), and they don’t seem to care,” Maldita.es public policy officer Marina Sacristán said.

The European Fact-Checking Standards Network published a report earlier this year examining antidisinformation measures at 10 online platforms and search engines. Surveying European fact-checkers, the report found that “not a single one considered that X takes the disinformation problem seriously” and that contracts Twitter had signed with fact-checkers before Musk’s takeover never took effect after the change in ownership.

Trying to contact X is often an exercise in futility, fact-checkers said. X has shut down many of its international offices, leaving fact-checkers with no one to turn to when issues arise. Jiménez Cruz said that Twitter used to have a fairly large office in Spain that served as Maldita’s point of contact with the company. “We don’t have that anymore, and this is something that we see all around the world when we talk with fact-checkers.”

Lead Stories, which operates in multiple languages, recently tried to start an X account for its Ukrainian fact checks, but the account was suspended, Schenk said. Attempts to contact X have been unsuccessful.

“We tried the support forum several times to get it (the suspension) lifted, but they’re not even responding, despite Elon saying any content that is legal should be free speech,” Schenk said. “Well, leadstories_UA is definitely not working, and our content is definitely legal. So I don’t know what the problem is.”

It’s a stark contrast to Twitter’s previous relationship with fact-checkers and journalists. The company used to have a “really strong” curation team, said Mahadevan. Journalists were given blue checks to show that they could be trusted, and it was easy to find high-quality news.

“Twitter was a very friendly place for journalists, and I think the quality of information there reflected that,” Mahadevan said. “And now X is a place that’s very hostile to journalists and fact-checkers, and the quality of information reflects that.”

While companies like Meta and TikTok have engaged professional fact-checkers to help counter misinformation on their platforms, X has chosen to outsource its fact-checking to the masses.

Community Notes, X’s fact-checking initiative, allows participants to propose notes to be displayed under misleading posts. Anyone who meets the basic requirements — a six-month old account with a verified phone number and no X rules violations — can join. Users write and vote on notes, and if a note gathers enough votes from people representing different points of view, it will be made public.

“What this means in practice is that conservatives and liberals have to agree that a fact check is appended to a tweet before it goes public and before anyone can see it,” Mahadevan said. “And because everything is essentially downstream from politics now, that is something that is very, very hard to do.”

The algorithm’s requirement of consensus across people with different viewpoints is an attempt to prevent brigading, coordinated campaigns to support or oppose a proposed note. But it also means that notes can languish in the system for days before they are made public, if they even reach that point. Yuwei Chuai, a doctoral researcher at the University of Luxembourg’s Interdisciplinary Center for Security, Reliability and Trust, said that the notes that do get displayed appear an average of 75 hours after the original post was created.
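The core idea behind this cross-viewpoint requirement can be illustrated with a toy model. This is not X’s actual ranking algorithm (which reportedly uses a more sophisticated matrix-factorization approach); the function name, thresholds, and two-cluster simplification below are illustrative assumptions only. The point it demonstrates is that a note rated helpful by only one side stays private, no matter how many total votes it collects:

```python
# Toy illustration (NOT X's actual algorithm) of "bridging" consensus:
# a note goes public only if raters from different viewpoint clusters
# both tend to find it helpful, which raw vote counts can't capture.

def note_goes_public(ratings, threshold=0.6, min_per_side=2):
    """ratings: list of (viewpoint, helpful) pairs, viewpoint in {'left', 'right'}."""
    by_side = {"left": [], "right": []}
    for viewpoint, helpful in ratings:
        by_side[viewpoint].append(helpful)
    # Require enough raters AND a helpful majority on each side.
    for votes in by_side.values():
        if len(votes) < min_per_side:
            return False
        if sum(votes) / len(votes) < threshold:
            return False
    return True

# 50 helpful votes from one side alone are not enough:
one_sided = [("left", True)] * 50 + [("right", False)] * 3
# But a handful of helpful votes from both sides is:
cross_partisan = [("left", True), ("left", True), ("right", True), ("right", True)]
print(note_goes_public(one_sided))       # False
print(note_goes_public(cross_partisan))  # True
```

This also makes the delay described above intuitive: a note sits in limbo until enough raters from the underrepresented side happen to weigh in.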

A Maldita.es analysis of disinformation surrounding the European Parliamentary elections this year found that X did not take visible action in 70% of cases, and Community Notes only appeared on 15% of the posts that were debunked by independent fact-checkers.

As Community Notes has grown, the discourse within the system has become more toxic, Mahadevan said. On some accounts, such as those affiliated with Vice President Kamala Harris and former President Donald Trump, every single post has a requested note.

“Community Notes has become basically ‘well, actually’ partisan bickering,” Mahadevan said. “The thing that concerns me too is if you have 100,000 contributors that can see all of these private notes that haven’t gone public, they’re still seeing misinformation. There’s tons of misinformation that’s shared in the Community Notes platform and may not always go public, but it’s there.”

Community Notes participants are anonymous, so anyone can contribute, regardless of expertise, and there are no consequences for writing an incorrect note. Sometimes, those incorrect notes end up being made public.

Factchequeado co-founder Laura Zommer said she’s concerned about the anonymity of Community Notes’ contributors and the lack of transparency around the algorithm that decides whether notes get surfaced. Fact-checkers, she said, commit to certain standards like nonpartisanship, transparency, having a corrections policy and being open about their methodology and financing: “I’m not sure X is doing all that.”

Still, fact-checkers and experts said the Community Notes model comes with some advantages. The sheer number of people on the platform allows for more content to be fact-checked, and anyone can request a Community Note for a post.

“You’re able to tap into a much wider and larger set of people,” said Jennifer Allen, a post-doctoral researcher at the University of Pennsylvania. “And so it’s just a much more scalable model than Facebook paying a few experts and expecting them to be able to fact-check the entire internet.”

Community fact-checking can also “engender more trust” among people who do not trust professional fact-checkers, Allen said. Several fact-checkers also pointed out that the notes use language that avoids patronizing users; they alert people to additional context without hiding the original content.

“It avoids accusations or loaded language like ‘This is false,’” Schenk said. “That feels very aggressive to a user.”

Studies into the effectiveness of Community Notes vary. Chuai and other researchers found that the display of a Community Note did not significantly reduce overall engagement with misleading posts. But in a subsequent analysis published as a preprint, those researchers found that misleading posts received less engagement after a note had been appended.

YouTube announced in June that it was experimenting with a new fact-checking system very similar to X’s Community Notes. Mahadevan said he is not surprised and expects crowd-sourced fact-checking models to spread to other platforms. “Platforms want moderation for cheap, and this is basically it.”

With the U.S. elections fast approaching, Zommer said that she is pessimistic about the possibility that X will ramp up its efforts to stem misinformation.

“Reality doesn’t show us that the incentives of public interest journalism and big tech are aligned,” Zommer said. “We are trying to provide better information for people. They are trying to (get) more money for their accounts.

“All the dreams related to big tech or platforms bringing better democracy are, I think, old-fashioned. Naive.”