

From Our Fellows: The Use of Mis- and Disinformation in Online Harassment Campaigns

This post is authored by Caroline Sinders, Non-resident Fellow at CDT and Founder of Convocation Design + Research.

Disclaimer: The views expressed by CDT’s Non-Resident Fellows and any coauthors are their own and do not necessarily reflect the policy, position, or views of CDT. 

Misinformation and disinformation cause manifold, intersecting harms: they mislead the public by perpetuating stereotypes and political falsehoods, including those targeting specific marginalized groups, and those falsehoods can jump quickly from online spaces to the offline world, resulting in real-world harm and violence. But these harms, and the tactics that create them, are not unique to misinformation and disinformation campaigns; they appear in other domains of internet research, including online harassment and harassment campaigns.

In my research, though, I’ve noticed that, with few exceptions, harassment is often treated as separate from disinformation and misinformation. Researchers, policymakers, academics, platforms, and advocates acknowledge that all of the above are harmful, but do not necessarily analyze or label them as the interlocking and overlapping issues they are. Understanding the relationship between misinformation and harassment can lead to a better understanding of harassment, more useful and grounded policies against it, and ultimately, faster and more effective ways to respond.

Online harassment, disinformation, and misinformation are distinct strategies, but they share goals: to pollute information ecosystems, and to discredit and silence journalists, writers, artists, dissidents, activists, and others. Online harassment disproportionately targets women, people of color, and members of the LGBTQIA+ community, resulting in threats, online and offline harms, and the chilling or silencing of expression by groups already marginalized in public discourse. These same groups are also targeted by coordinated disinformation and misinformation campaigns; it just happens that those campaigns are sometimes labeled “harassment” instead of mis- and disinformation. In a recent example, 4chan users falsely accused a trans woman of being the Uvalde shooter simply because she was trans, resulting in a cacophony of online harassment against her that was amplified by online personalities with large followings.

As I argued in 2018, the rise of misinformation, disinformation, and digital violence comes from the same far-right spaces — on platforms like Twitter, 4chan, 8chan, Reddit, and Discord — that launch online harassment campaigns and organized violence. Researcher Katherine Cross has noted that the coordination of the 2017 hashtag #CNNBlackmail pulled directly from the playbook of #Gamergate, a harassment campaign started in 2014 that targeted women and marginalized groups in video games. As in Gamergate, Twitter threads under the #CNNBlackmail hashtag featured antisemitic memes and rhetoric, violence, and the doxxing of innocent targets — in this instance, CNN employees and their families.

As Cross further argued, Gamergate and the campaigns that preceded it established the social norms, vocabulary, and ideals of the alt-right. Operation Lollipop, for instance, was a coordinated harassment campaign involving impersonation, designed to create divisions among feminists online. Likewise, in the 2014 #EndFathersDay campaign, 4channers impersonated BIPOC feminists with the goals of creating infighting among feminists on Twitter and cultivating harassment toward people espousing feminist ideals. Researchers and activists Shafiqah Hudson and I’Nasah Crockett created and used the #YourSlipIsShowing hashtag to document the harassment and impersonations carried out under Operation Lollipop.

While impersonation is a common harassment tactic, so are misinformation and disinformation. Gamergate involved numerous coordinated misinformation and disinformation campaigns, which deployed doctored images and fabricated content against individuals, companies, and groups to create networked, planned harm. One such campaign was “Operation Disrespectful Nod,” created in 2014 by Gamergaters to convince Intel to pull advertising from the games blog Gamasutra. Gamergaters first found an email contact for an employee at Intel; other campaign participants then used a subreddit to strategize about how to email that contact. The emails they drafted and sent contained false claims that a Gamasutra editor had insulted gamers, and the tactic worked temporarily: Intel briefly pulled its advertising. Gamergaters used similar strategies in 2016 to spread misinformation about targets — myself included — who submitted panels to SXSW; that campaign eventually led to violent threats against the panelists and the initial cancellation of the panels.

Gamergate members also frequently created doctored images or deepfakes of victims, with the goal of continuing or reinforcing a false narrative around them. Sometimes this meant placing a victim’s head onto pornographic images to support claims that the victim was a sex worker, or had traded sexual acts with journalists for games coverage. In another example, Gamergaters doctored images of Veerender Jubbal and created a viral tweet that portrayed him as one of the perpetrators of the 2015 Paris attacks, preying on Western racism against brown people. This disinformation campaign led newspapers across Europe to share the image and name Jubbal as one of the terrorists, potentially endangering his life and resulting in death threats against him. The campaign against Jubbal was clearly multidimensional, and can be understood as combining misinformation, disinformation, and harassment.

Some coordinated hate speech attacks have rightfully been categorized as misinformation, but those with specific targets — such as the Facebook campaigns aimed at Ilhan Omar and Rashida Tlaib — are not dissimilar from the fabrications Jubbal faced, or the kinds of targeting Gamergate engaged in. Harassment and hate speech, and how they manifest within planned and coordinated digital content, have more similarities than differences.

Clearly, false information, targeted disinformation, and the networked coordination of harassment campaigns can combine into a perfect storm: disinformation spreads without context and warps into viral misinformation campaigns. Clarifying how misinformation, disinformation, and harassment come together in this pattern can help us develop new ways to understand, research, and respond to harassment, and in turn inform policy and enable faster responses to growing harassment campaigns. Forensic investigations, which piece together what happened, to whom, by whom, and across which platforms, already form the backbone of research into disinformation and misinformation, but are rarely deployed to examine harassment campaigns. Applying the discourse around — and methodologies of — misinformation and disinformation research to harassment campaigns will create a fuller picture of networked digital harm, and of how to respond to and mitigate that harm as it affects real people.