Social media platforms have become ubiquitous in modern economies. As of 2023, there were more than five billion active social media users worldwide, representing over 60% of the world's population. In the US, the average user spent 10.5% of their time on these services (Kemp 2024). Partially due to the increasing share of time that users spend on social media, policymakers have raised concerns that these platforms can influence political attitudes and electoral outcomes (Fujiwara et al. 2020), lead to significant mental health and addiction challenges (Braghieri et al. 2022), and expose consumers to misinformation and toxic content (Jiménez-Durán et al. 2022). In addition, the dominant social media platforms have considerable market power; as such, it is not clear that market competition can help resolve these policy concerns. Regulators in the EU have implemented several policies to deal with these issues – such as the Digital Markets Act (DMA), the General Data Protection Regulation (GDPR), and the Digital Services Act (DSA) – and governments around the world are actively drafting legislation to address them.
How can the research community provide evidence to help guide the design of such regulations? One option is to empirically evaluate policies after they have been implemented – as has been the case for the EU’s GDPR and DMA, Apple’s ATT, and the German NetzDG law – which can help guide future amendments to these regulations as well as their adoption in other jurisdictions (Jiménez-Durán et al. 2022, Aridor et al. 2024, Johnson 2024, Pape and Rossi 2024). However, this approach provides policymakers with meaningful evidence only after years of implementation, and only for policies that were actually implemented, not for counterfactual policies that were considered.
Another option is to have platforms explicitly conduct experiments simulating the effects of proposed policy interventions (Guess et al. 2023, Nyhan et al. 2023, Wernerfelt et al. 2025). This option comes with its own set of challenges: it gives platforms outsized influence over the type of questions and interventions that can be studied, and the platforms are not impartial agents in the policy debate (Hendrix 2023a, Hendrix and Barrett 2023b).
In a forthcoming chapter in the Handbook of Experimental Methods in the Social Sciences (Aridor et al. 2025), we provide a practical guide to a third option that exploits how third-party technologies and platform features can be used to generate researcher-driven experimental variation. Our method combines the best of the two aforementioned options: it is accessible to researchers without requiring explicit platform cooperation, and it allows for counterfactual policy evaluation before a chosen policy is implemented. The chapter provides detailed documentation for running such experiments: it starts with how to use social media platforms to recruit experimental subjects, then documents how to combine platform features with technologies such as Chrome extensions and mobile apps to collect data and generate experimental variation, and concludes with considerations and limitations of such experiments. Overall, this methodology serves as a powerful toolkit to study policy issues not only on social media platforms, but also on platforms such as Amazon (Farronato et al. 2024), Google Search (Allcott et al. 2025), and YouTube (Aridor forthcoming). We document several experiments that we conducted and explain how they relate to policy challenges.
First, how does social media impact political attitudes? Levy (2021) tackles this question by directly recruiting a set of participants from Facebook and exploiting the structure of the news feed, which sourced content from the pages users followed. Levy randomly nudges participants to follow conservative or liberal news outlets on Facebook, leading to experimental variation in the type of content that shows up in participants’ news feeds. This treatment allows Levy to quantify the causal effect of changes to the news feed algorithm on downstream outcomes, including news consumption, political opinions, and affective polarisation.
Second, is social media addictive? What does addiction imply for the large market power of these platforms? Like Allcott et al. (2022), our work (Aridor forthcoming) tackles these questions by inducing exogenous variation in social media use via third-party mobile applications. Importantly, unlike in many other markets studied by economists, having individuals install this software allows for personalised variation. This feature allows Allcott et al. (2022) to quantify the role of self-control problems in driving usage, and allows us to characterise second-choice diversion ratios between social media applications and to quantify the role of consumer inertia in usage.
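The second-choice logic can be illustrated with the standard diversion-ratio arithmetic. The sketch below uses made-up numbers for illustration, not estimates from the papers cited above:

```python
def diversion_ratio(delta_blocked_minutes: float, delta_substitute_minutes: float) -> float:
    """Share of time lost on a restricted app that flows to a given substitute.

    delta_blocked_minutes: change in daily minutes on the restricted app (negative).
    delta_substitute_minutes: change in daily minutes on the substitute app.
    """
    return delta_substitute_minutes / -delta_blocked_minutes

# Hypothetical example: if restricting one app cuts its use by 30 minutes/day
# and use of a second app rises by 12 minutes/day, the diversion ratio to the
# second app is 12 / 30 = 0.4
```

Because the software runs on each participant's own device, the restriction (and hence the numerator and denominator) can be measured at the individual level, which is what makes the personalised variation described above possible.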
Third, how does the hateful and toxic content encountered by consumers on such platforms impact overall engagement, and do platforms have an incentive to remove this content? Beknazar-Yuzbashev et al. (2025) conduct a browser experiment with social media users – randomly decreasing their exposure to toxic content on Facebook, Twitter, and YouTube – allowing them to quantify the impact of toxic content on the time users spend on social media, their engagement with ads, and the toxicity of content that participants subsequently produce.
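To give a concrete sense of the logic such a browser extension encodes, here is a minimal sketch. It is written in Python for readability (an actual extension would run JavaScript), and the threshold, hashing scheme, and treated share are illustrative assumptions, not details from the paper:

```python
import hashlib

TOXICITY_THRESHOLD = 0.7  # hypothetical cutoff; a real study would calibrate this

def assign_treatment(participant_id: str, treated_share: float = 0.5) -> bool:
    """Hash-based randomisation: assignment is stable across browser sessions
    for the same participant ID."""
    digest = hashlib.sha256(participant_id.encode()).hexdigest()
    return int(digest, 16) % 10_000 < treated_share * 10_000

def should_hide(toxicity_score: float, treated: bool) -> bool:
    """Hide a feed item only for treated participants, and only when a
    classifier's toxicity score exceeds the cutoff."""
    return treated and toxicity_score >= TOXICITY_THRESHOLD
```

Keeping control participants' feeds untouched while hiding high-toxicity items for the treated group is what generates the experimental contrast in exposure.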
These examples illustrate the type of variation that researchers can generate and the data they can collect. In addition to contributing to ongoing policy debates, we argue that social media can be used to answer a wide variety of questions in economics that are not directly related to digital platforms. For example, social media can be used to recruit specific participants for an experiment (Trachtman 2024), interventions on social media can be used to study the effect of political campaigns (Enríquez et al. 2024), and behaviour on social media can be used as an outcome for studies on discrimination (Ajzenman et al. 2023, Angeli and Lowe 2023).
While social media experiments serve as a powerful tool, there are important limitations. First, the effect sizes of interventions are often small, requiring large samples or within-subject designs to detect a meaningful impact. At the same time, these experiments often lack the scale typically associated with large-scale policy changes or platform-level experiments. Moreover, longitudinal approaches may suffer from attrition (such as participants closing their social media accounts), while noncompliance (such as reactivating accounts during a deactivation study) can further bias results. Another common challenge is that interference across users can violate the Stable Unit Treatment Value Assumption (SUTVA), and both user and algorithmic responses may adapt in equilibrium, complicating interpretation. Finally, ethical considerations often constrain study design and limit replicability. We discuss these limitations and ways to deal with them in our chapter.
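As a back-of-the-envelope illustration of the first limitation, a standard two-sample power calculation shows why small effect sizes demand large samples. This is a sketch using the textbook normal-approximation formula, not a calculation from the chapter:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Participants per arm needed to detect a standardised effect (Cohen's d)
    in a two-sided, two-sample comparison, using the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # value delivering the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_d) ** 2)

# A "small" effect of d = 0.1 requires 1,570 participants per arm,
# whereas d = 0.5 requires only 63.
```

Within-subject designs effectively reduce the residual variance per comparison, which is one reason they are an attractive response to small effect sizes.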
The handbook chapter complements our review of the economic literature on social media (Aridor et al. 2024). It provides a methodological guide for exploring fundamental economic questions and generating insights relevant to policy discussions.
References
Allcott, H, J C Castillo, M Gentzkow, L Musolff and T Salz (2025), “Sources of market power in web search: Evidence from a field experiment”, NBER Working Paper No. 33410.
Allcott, H, M Gentzkow and L Song (2022), “Digital addiction”, American Economic Review 112(7): 2424–63.
Ajzenman, N, B Ferman and P C Sant’Anna (forthcoming), “Discrimination in the formation of academic networks: A field experiment on #EconTwitter”, American Economic Review: Insights.
Angeli, D and M Lowe (2023), “Virtue signals”, CESifo Working Paper No. 10475.
Aridor, G (forthcoming), “Measuring substitution patterns in the attention economy: An experimental approach”, RAND Journal of Economics.
Aridor, G, Y-K Che, B Hollenbeck, D McCarthy and M Kaiser (2024), “Evaluating the impact of privacy regulation on e-commerce firms: Evidence from Apple’s app tracking transparency”, working paper.
Aridor, G, R Jiménez Durán, R E Levy and L Song (2024), “The Economics of Social Media”, VoxEU.org, 20 May.
Aridor, G, R Jiménez Durán, R E Levy and L Song (2025), “Experiments on social media”, Handbook of Experimental Methods in the Social Sciences, Edward Elgar Publishing.
Beknazar-Yuzbashev, G, R Jiménez-Durán, J McCrosky and M Stalinski (2025), “Toxic content and user engagement on social media: Evidence from a field experiment”, CESifo Working Paper No. 11644.
Braghieri, L, R Levy and A Makarin (2022), “Social media and mental health”, VoxEU.org, 22 July.
Enríquez, J R, H Larreguy, J Marshall and A Simpser (2024), “Mass political information on social media: Facebook ads, electorate saturation, and electoral accountability in Mexico”, Journal of the European Economic Association 22(4): 1678–722.
Farronato, C, A Fradkin and A McKay (2024), “Platform Vertical Integration and Consumer Choice: Evidence from a Field Experiment”, working paper.
Fujiwara, T, K Müller and C Schwarz (2020), “The effect of social media on elections: Evidence from the United States”, VoxEU.org, 30 October.
Guess, A M, N Malhotra, J Pan, P Barberá, H Allcott, T Brown and J A Tucker (2023), “How do social media feed algorithms affect attitudes and behavior in an election campaign?”, Science 381(6656): 398–404.
Hendrix, J (2023a), The Meta Studies: Nuanced Findings, Corporate Spin, and Media Oversimplification, TechPolicy Press, 2 August.
Hendrix, J and P M Barrett (2023b), Examining the Meta 2020 US Election Research Partnership, TechPolicy Press, 2 August.
Jiménez-Durán, R, K Müller and C Schwarz (2022), “The Effect of Content Moderation on Online and Offline Hate”, VoxEU.org, 23 November.
Johnson, G (2024), “Economic research on privacy regulation: Lessons from the GDPR and beyond”, The Economics of Privacy, University of Chicago Press.
Kemp, S (2024), Digital 2024: Global overview report, DataReportal, 31 January.
Levy, R E (2021), “Social media, news consumption, and polarization: Evidence from a field experiment”, American Economic Review 111(3): 831–70.
Nyhan, B, J Settle, E Thorson, M Wojcieszak, P Barberá, A Y Chen and J A Tucker (2023), “Like-minded sources on Facebook are prevalent but not polarizing”, Nature 620(7972): 137–44.
Pape, L D and M Rossi (2024), “Is Competition Only One Click Away? The Digital Markets Act Impact on Google Maps”, working paper.
Trachtman, H (2024), “Does promoting one healthy behavior detract from others? Evidence from a field experiment”, American Economic Journal: Applied Economics 16(2): 249–77.
Wernerfelt, N, A Tuchman, B T Shapiro and R Moakler (2025), “Estimating the value of offsite tracking data to advertisers: Evidence from meta”, Marketing Science 44(2): 268–86.