Argument

An expert’s point of view on a current event.

Social Media Companies Now Work for Governments—Not Users

Content from rap videos to protest photos is being removed in the name of “national security.”

By Aliya Bhatia, a senior policy analyst at the Center for Democracy & Technology, and Mona Elswah, a project fellow at the Center for Democracy & Technology.


An illustration photograph of social networking apps on a phone screen taken in London on Feb. 20. Justin Tallis/AFP via Getty Images




As tensions between India and Pakistan rose and then fell dramatically in recent days, the Indian government used the nation’s digital governance laws to compel Meta to remove news and posts related to the evolving story. It even urged the blocking of all Pakistan-origin content.

Governments around the world are behaving similarly, gaining more and more authority to request or require the removal of content and weaponizing these laws to force the takedown of rap videos, photos from protests, and much more. Accelerating this trend are companies’ incentives to avoid these requests in the first place by preemptively and quietly suppressing content they anticipate governments will find objectionable. In too many cases, moderation at the hands of private entities and governments is practically indistinguishable.

In India’s case, its IT Rules, as they are known, equip law enforcement agencies and other government bodies with the authority to request that social media companies take down user-generated content in the “interest of the sovereignty and integrity of India” and to hand over user data as well. The rules have been invoked before. In September 2024, cybercrime units within regional police departments used them to request the takedown of content after a hashtag gained traction on Facebook and X blaming the Muslim minority in southern India for supposedly bringing animal fat-laden sweets to one of the largest Hindu temples in India. The hashtag helped heighten already strained tensions, and the opaque, poorly governed response of the police agencies made matters worse. Charges were filed against seven X users for incorrectly attributing the sweets to Amul, a large dairy chain, and regional police also ordered the removal of countless memes that proliferated in the aftermath.

As part of a larger project studying content moderation in the global south, we have found that a majority of frequent Tamil online service users suspect they have faced moderation intended to silence their opinions and political beliefs. Content creators, activists, and journalists we spoke with frequently reiterated these suspicions, which were corroborated by interviews with former platform representatives, who painted a picture of increasing collusion between governments and companies to suppress speech. While our research was limited to India and Sri Lanka, the tools and tactics we documented are appearing around the world. If platforms want to keep users and their confidence, they have to be more honest and transparent about when, and how, governments influence moderation.


New legislation passed in Indonesia, Kenya, Brazil, Europe, and beyond has supercharged the authority of government officials, law enforcement bodies, and courts to request or require the takedown of specific posts or categories of posts. And companies are seeing these requests pile up: Facebook reported an exponential increase in government requests for content restrictions from 2023 to 2024, and Google told the media that it fielded 87 percent more requests from governments to remove content across its services from 2021 to 2023. Indonesia’s MR5 regulation has resulted in more than 140 million orders to Facebook alone to geoblock posts and comments, the majority of them related to gambling but others targeting “divisive political speech.”

Managing government requests for restrictions has become a massive workstream at companies, but determining which requests are appropriate and which are not seems to be less of a priority. Gag orders, the mass layoffs of trust-and-safety staff, and the sheer volume of requests are pushing companies toward a compliance-at-scale approach rather than attempting to discern when a government request should be denied.

Apart from reducing costs, giving the government what it asks for may be politically expedient. Governments’ arsenal of influence over companies’ decisions goes beyond formal takedown requests. Content moderation policies and processes are increasingly becoming a bargaining chip that companies and governments are using to avoid future regulation, minimize liability, and generally keep business moving.

Talking to former platform trust-and-safety staff and moderators, we found that governments are influencing moderation in multiple ways. First, teams are modifying policies to hew more closely to government policies and preferences. Second, moderators are proactively removing speech to preempt government requests entirely, including legally permissible speech that moderators think would prove controversial. Other research has shown that moderators also rely on flags sent in by law enforcement bodies, such as the opaquely named internal referral units, which report content as violating platforms’ private terms of service. Finally, companies invoke “break-glass measures” to supercharge moderation when governments exert extra pressure or threaten to shut down the platform. These informal methods are generally ad hoc and undocumented, sometimes leaving even policy team members in the dark.

Companies’ desire to appease and acquiesce to governments’ interests is not new, even if it feels more apparent than ever, but their increasing lack of transparency is. Ten years ago, CEOs touted their platforms’ role in the Arab Spring uprisings against repressive governments and broadly committed (even if only rhetorically) to human rights principles. It was common for platforms to push back publicly on governments’ content restrictions and to publish the requests they received on the Lumen Database, enabling the public and users to understand what their government was doing and to see whether laws served genuine safety or political agendas. Now, companies directly facilitate government action by removing posts and handing over user data, and, according to our interviews, they often do so without notifying the public or affected users.

Increasingly, users accuse social media companies of being arms of the government and of preemptively taking down more content than even governments want, at the cost of undermining social movements. Users also feel constantly monitored and second-guess why they use these social media services in the first place, particularly for speaking about political issues. In response, users develop circumvention tactics to evade scrutiny: cropping images, flipping letters, or using “algospeak” to avoid terms suspected of being shadowbanned (for instance, discussing U.S. President Donald Trump’s immigration actions with terms like “cute winter boots”), making good-faith moderation even harder.

By allowing and even encouraging further government capture, companies damage their own reputations, which can have financial and other repercussions. Predictable, trustworthy, and rights-respecting moderation processes have been linked with increased revenue for online services hosting user-generated speech; conversely, as X learned when it changed its policies after a change in ownership, erratic moderation can drive away users and advertisers. At the same time, investors are directly demanding greater accountability and respect for international human rights standards at annual shareholder gatherings.

Moreover, companies’ surrender to government demands can conflict with their obligations under the U.N. Guiding Principles on Business and Human Rights and with their own stated commitments to human rights and user safety. As government scrutiny expands online through informal and indirect channels, and as fewer spaces remain to speak outside the influence of governments, users are losing their ability to speak freely. What’s more, users cannot exercise their right to seek remedy from their governments when they don’t know that government actors are the cause of their content being removed.

Companies can do more to earn users’ trust back.

First, they can consistently publish government orders where possible or ask that the government agency make them public, consistent with their human rights obligations. Academics, human rights advocates, and experts all rely on these notices to understand the breadth of influence that both social media companies and governments wield over our information environment.

Second, they can notify users when their speech is moderated due to a government order or pressure. Too often, users are left in the dark about whether a government demand was behind a takedown, and that breeds conspiratorial thinking.

Finally, they can push back when orders from governments violate human rights. This is something most large social media companies have done before. Existing frameworks created by the United Nations and the Global Network Initiative enable trust-and-safety teams to weigh the trade-offs when facing demands.


Beyond individual government requests, companies should communicate with users more to assuage concerns of government scrutiny. Equipping users with the knowledge of what norms are expected online, how rules change, and who is scrutinizing their speech enables users to make more informed decisions about their online safety.

Ultimately, as companies and governments further back away from previously agreed-upon commitments to people’s liberties and rights, they are forgetting the users who make their platforms viable and successful. Companies should be on the side of users instead of doing governments’ bidding, particularly as governments themselves backslide on their commitment to protect their own. As one Tamil content moderator told us: “The platforms taking you down is actually a matter of inconvenience. The real danger is the democratic climate of the country.”




Aliya Bhatia is a senior policy analyst at the Center for Democracy & Technology.

Mona Elswah is a project fellow at the Center for Democracy & Technology.
