Shedding Light on Shadowbanning
New CDT Report on Shadowbanning: Everything You Need to Know
Sex workers, conservative bloggers, Black Lives Matter activists, Indian farmers, trans artists, Palestinian protesters, plus-sized influencers — these are just some of the many social media users who believe social media companies are covertly hiding or taking down their posts, a practice known as “shadowbanning.”
Are they paranoid? Or is shadowbanning a common practice? Unfortunately, social media platforms are designed in a way that makes it practically impossible for users to know for sure.
Today, CDT is releasing a new report, Shedding Light on Shadowbanning, examining how shadowbanning works on social media, which groups believe they have been shadowbanned, and what effects the popular perception of shadowbanning has on online speech.
We ran a representative survey of over 1,000 U.S. social media users to better understand how many and which people believe they have been shadowbanned on social media. We also held 36 interviews — 13 with users who believed they had been shadowbanned, 10 with members of academia and civil society who work on issues related to shadowbanning, and 13 with employees at social media companies who work on content moderation.
Our survey found that nearly one in ten U.S. social media users believe they have been shadowbanned, and those users are disproportionately male, Republican, Hispanic, or non-cisgender. The platform with the largest share of its user base who believed they had been shadowbanned was Facebook (8.1%), followed by Twitter (4.1%), Instagram (3.8%), and TikTok (3.2%). Users most frequently believed they had been shadowbanned for their political views (39%) or their positions on social issues (29%).
In interviews, we found that people who believed they had been shadowbanned felt isolated, particularly because, by the very nature of shadowbanning, they had no way to know for sure whether their content was being surreptitiously moderated or whether other users simply did not find it engaging. Some interview respondents also felt gaslit by social media companies’ public denials of shadowbanning, even in the face of their own evidence. Many also believed that platforms conspired against them because of their beliefs, identity, or profession.
However, interviews with employees at social media companies revealed that, in certain cases, shadowbanning may be necessary to protect the safety and integrity of a platform. Shadowbanning can deter abusive sockpuppet accounts created to circumvent bans, frustrate attempts to reverse engineer moderation systems, and blunt other ways users find and exploit weaknesses in platforms’ content moderation systems. Still, evidence from our interviews and the work of social media scholars suggests that social media companies may shadowban users for the content of what they post, not just for trying to take advantage of their systems.
To reduce the harms users experience from shadowbanning and to improve public trust in social media companies’ capacity to moderate content, our report makes three recommendations:
- Minimize shadowbanning. Social media companies should shadowban only when strictly necessary to protect the safety and integrity of the platform;
- Disclose shadowbanning practices. Social media companies should publicize all the circumstances in which they will moderate a user’s content without informing them; and
- Empower researchers. Social media companies should provide independent researchers with the data they need to verify the shadowbanning that users report experiencing and to uncover other harmful effects shadowbanning may have.
Check out the report’s survey and raw results data here.
Download the list of references for this report in BibTeX (.bib) or RIS (.ris) format. These files can be opened in commonly used reference management software.