Spotlight on Shadowbanning
The range of tools that social media companies have in their content moderation tool belts is expanding. Until fairly recently, when companies wanted to stop abusive content on their platforms, they would remove it. Today, though, as social media companies are asked to address more and more social ills, they have commensurately expanded their methods of content moderation, particularly with regard to so-called “borderline” content: content that toes the line between what is acceptable and what violates a platform’s terms of service.
These methods are often more surreptitious and opaque than removing content outright. A Twitter username may stop appearing in a search bar’s autocomplete. A Facebook Page might stop coming up in its followers’ newsfeeds. Certain Instagram hashtags may stop surfacing new posts, and certain conversation topics may disappear from TikTok entirely. Users may notice changes in how their content is performing, but the platforms often don’t tell them that anything is happening, and when they do, they almost never say why. With no means of recourse or acknowledgment that their content is being treated differently, users can feel as if their posts have been sent to a shadow realm, where no one can see them. Popular culture has dubbed this emerging, opaque form of content moderation “shadowbanning.”
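To make the mechanics concrete, the sketch below is a purely hypothetical illustration of how a ranking system could quietly reduce a post’s visibility without removing it. It does not describe how any particular platform actually works; the names, classifier flag, and penalty value are all invented for this example. The point is the asymmetry: the post still exists and its author still sees it as published, but a hidden signal causes it to sink in ranked feeds.

```python
# Hypothetical illustration only: a toy ranking step showing how a
# "visibility penalty" could demote borderline content without removing it.
# All names and values here are invented; no real platform's system is shown.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float       # relevance/engagement score from the ranker
    borderline_flag: bool   # set by an (opaque) moderation classifier

VISIBILITY_PENALTY = 0.1    # arbitrary demotion factor for this sketch

def ranked_feed(posts: list[Post]) -> list[Post]:
    """Order posts by score, silently demoting flagged ones."""
    def effective_score(p: Post) -> float:
        return p.base_score * (VISIBILITY_PENALTY if p.borderline_flag else 1.0)
    return sorted(posts, key=effective_score, reverse=True)

# A flagged post is never deleted, and its author receives no notice;
# it simply drops toward the bottom of every ranked feed.
```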
CDT has long been an advocate for transparency in content moderation. We have worked with tech companies, academics, and lawmakers to promote more transparent content removal practices. As content moderation practices evolve to include more shadowbanning, our recommendations will evolve with them. But in order to respond to this new environment, we must first better understand how these practices work and how they affect society at large. That is why we are beginning a research project on shadowbanning and other new, opaque forms of content moderation. Our work will address the following questions:
- What shadowbanning and similar opaque practices do social media companies engage in? To make recommendations about opaque content moderation, we have to understand how it works, on platforms large and small.
- Who is affected by shadowbanning? Political conservatives have been some of the loudest voices to complain about opaque content moderation, but organizers, sex workers, and historically marginalized racial and gender identity groups report that they have been affected as well.
- What are the effects of shadowbanning on speech? As with any form of content moderation, we want to understand the chilling effects that shadowbanning and other opaque techniques may have on free speech and how they might shape public discourse.
Content moderation is hard. Platforms have to strike a balance between allowing users to express themselves and building systems that cannot be easily circumvented by those looking to post content that is unlawful or that otherwise violates their rules.
Through our forthcoming workshops and public report on shadowbanning and other opaque forms of content moderation, CDT aims to provide policymakers, researchers, companies, and other civil society organizations with useful research that can help shape these next-generation content moderation techniques for the better.