Meta has announced a major overhaul of its content moderation policy, aimed at restoring its platforms as spaces for open discussion and at scaling back the heavy-handed enforcement users have criticized. As part of the change, Facebook, Instagram, and Threads will introduce Community Notes, a feature that will replace the independent fact-checking program that has operated on Meta's platforms since 2016.
The fact-checking program, which Meta has now ended in the U.S., relied on collaboration with independent organizations to verify viral misinformation and fake news. According to Mark Zuckerberg, however, it became a tool for censorship rather than a way of giving users additional information.
Community Notes, which replaces this program, is modeled on the feature of the same name on X (formerly Twitter). It will allow users to add context to potentially misleading posts, and to guard against one-sided notes it will require agreement among contributors with differing viewpoints.
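The underlying idea can be illustrated with a toy model. X's open-source algorithm (and presumably whatever Meta ships) uses matrix factorization over rating histories; the minimal sketch below, with invented cluster labels and thresholds, simply requires a "helpful" majority inside more than one viewpoint cluster before a note is published.

```python
# Toy sketch of cross-perspective agreement for note publication.
# Cluster labels, thresholds, and the majority rule are invented for
# illustration; real systems infer viewpoints from rating history.

from collections import defaultdict

def note_is_shown(ratings, min_raters_per_cluster=3, helpful_share=0.6):
    """ratings: list of (viewpoint_cluster, is_helpful) tuples."""
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)
    if len(by_cluster) < 2:  # require agreement across perspectives
        return False
    for votes in by_cluster.values():
        if len(votes) < min_raters_per_cluster:
            return False
        if sum(votes) / len(votes) < helpful_share:
            return False
    return True

# A note rated helpful by only one side is not published:
one_sided = [("left", True)] * 5 + [("right", False)] * 5
print(note_is_shown(one_sided))  # False

# A note most raters on both sides find helpful is published:
bridging = [("left", True)] * 5 + [("right", True)] * 4 + [("right", False)]
print(note_is_shown(bridging))  # True
```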
Meta confirmed that it will not control the content of Community Notes or decide which notes appear on its platforms; users will collaboratively write and rate notes to keep them accurate and impartial. The program will launch first in the U.S. in the coming months, and users can already join the waiting list for testing via Facebook, Instagram, and Threads.
Meta is also revising its use of automated content moderation systems. The company acknowledged that these tools erroneously remove millions of posts every day. Going forward, automated enforcement will focus on serious violations such as terrorism, child sexual exploitation, and fraud, while Meta will rely more on user reports for less severe ones.
The company will gradually scale back the demotion (reduced visibility) of borderline content and will require higher classifier confidence before posts are removed. Meta will also relocate its trust and safety teams from California to Texas and other parts of the U.S. to bring more diversity to its moderation policies.
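As a rough illustration of what "higher confidence thresholds" means in practice, the hypothetical routing below only auto-removes a post when a classifier's violation score is very high, queues borderline cases for human review, and otherwise leaves the post up and relies on user reports. Every number and name here is invented for the example.

```python
# Hypothetical enforcement routing based on classifier confidence.
AUTO_REMOVE_THRESHOLD = 0.98   # raising this trades recall for precision
HUMAN_REVIEW_THRESHOLD = 0.80

def route_post(violation_score: float) -> str:
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"     # only very confident cases are removed automatically
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"    # borderline cases go to a person instead of demotion
    return "leave_up"            # below this, rely on user reports

print(route_post(0.99))  # auto_remove
print(route_post(0.85))  # human_review
print(route_post(0.40))  # leave_up
```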
To speed up appeals of moderation decisions, Meta is expanding its moderation teams and using large language models (LLMs) as an additional check on content before removal decisions are made. The company is also exploring facial recognition technology to simplify account recovery.
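A minimal sketch of the "LLM as second check" idea, with a stubbed-out model call (nothing here reflects a real Meta API): removal proceeds only when the first-pass classifier and the LLM agree, reducing single-system false positives.

```python
def llm_second_opinion(post_text: str) -> bool:
    # Stub standing in for a real model call; always returns "no violation" here.
    return False

def should_remove(classifier_flagged: bool, post_text: str) -> bool:
    # Require both the first-pass classifier and the LLM to agree before removal.
    return classifier_flagged and llm_second_opinion(post_text)

print(should_remove(True, "borderline post"))  # False: the second check blocks removal
```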
In addition, Meta is changing its approach to political content on its platforms. Since 2021 the company has, based on user feedback, reduced the visibility of posts about elections, politics, and social issues; it now plans to increase the visibility of such content with a more personalized approach.
Political posts from users and pages will be ranked in the same way as any other content in the news feed: Meta will use explicit signals (e.g., likes) and implicit signals (e.g., post views) to predict what is important to each user.
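A toy version of such ranking might combine predicted explicit and implicit engagement into a single score. The weights and field names below are invented for illustration; Meta's actual ranking models are far more complex.

```python
# Hypothetical feed scoring from explicit and implicit signals.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_like: float  # predicted probability the user likes the post (explicit signal)
    p_view: float  # predicted probability the user views the post (implicit signal)

def interest_score(post: Post, w_like: float = 2.0, w_view: float = 1.0) -> float:
    # Weighted combination of predicted engagement; weights are made up.
    return w_like * post.p_like + w_view * post.p_view

feed = [
    Post("a", p_like=0.10, p_view=0.70),  # score 0.9
    Post("b", p_like=0.30, p_view=0.40),  # score 1.0
]
feed.sort(key=interest_score, reverse=True)
print([p.post_id for p in feed])  # ['b', 'a']
```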