It’s almost a cliché at this point: any set of policy recommendations will include a call for improved transparency. Whether we’re talking about government surveillance reform, understanding how companies use your data, or creating technical standards that help to signal when censorship occurs, CDT routinely advocates for increased transparency from the governments and companies implementing policies that affect everyone’s rights. We’ve seen real progress as company transparency reporting develops into a best practice for the internet and telecommunications industry, with dozens of major companies regularly reporting on the demands they receive from governments around the world for user data and content restriction.
But transparency isn’t an end in itself. Rather, it’s a crucial vehicle for understanding the forces that shape our online experiences. Twitter’s latest report provides a good example of just how much we can learn from company transparency reports. Twitter breaks ground by publishing new data about the complex interactions that social media companies can have with governments seeking to restrict content online. In this post, we dig into the report and discuss what it reveals about the mounting pressure from governments that intermediaries face to censor user speech.
Multiple vectors for government demands
Twitter’s transparency report now covers four different types of interaction with government processes, each of which could lead to limits on or removal of content or accounts from the site:
Official legal process, considered as a matter of law
This is the paradigm case of government demands for content restriction online: Law enforcement obtains a court order declaring certain content or activity illegal and serves the order on a company in its jurisdiction; the company considers whether the order is valid and whether to limit access to the content in that country.
In Twitter’s report, this information appears in the column “Removal requests (court orders)”. According to the latest data, the overwhelming majority of court orders that Twitter received came from Turkey, which generated 844 of the 884 total court orders Twitter received in the past six months. Content was withheld in response to 19 percent of these and other official legal orders in Turkey.
Official legal process may also include requests that are not court orders but that are issued following some formal procedure; Twitter reports these figures in “Removal requests (government agency, police, other).” The report reveals that Twitter receives more than five times as many of these administrative orders as it does court orders, and that the vast majority of countries send the company many more administrative orders than court orders. This comparison is useful information for advocates and users, as it can help them identify the most common mechanisms their governments use to pursue restrictions on speech.
Official legal process, considered under Terms of Service
With its January-June 2016 report, Twitter began breaking out the number of official legal requests that it responded to not as a matter of law, but as a Terms of Service violation. This information is reported in the column “Accounts (TOS)” and provides a more nuanced view of how official legal orders may result in content removal.
For example, in the most recent report, Twitter discloses that of 522 official removal requests in Russia, content was withheld in only 28 percent of cases that dealt with the request as a matter of law (withholding 55 accounts and 89 individual tweets). But the “Accounts (TOS)” data indicates that an additional 282 accounts were affected when Twitter reviewed an official legal request and determined the content or account violated its Terms of Service. This means that about 80 percent of the time, when Russian officials notified Twitter of allegedly unlawful content, some of that content came down. This type of information is useful for understanding how governments may succeed in seeing speech removed from a site, even if the government lacks jurisdiction or the speech doesn’t violate the law.
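For readers who want to check the arithmetic behind that “about 80 percent” figure, here is a quick back-of-the-envelope sketch. It assumes, as an approximation, that each request corresponds to roughly one account or item; Twitter does not state this mapping explicitly.

```python
# Rough check of the Russia figures from Twitter's report.
# Assumption (ours, not Twitter's): roughly one account/item per request.
total_requests = 522  # official removal requests from Russia

# ~28% of requests led to withholding as a matter of law
withheld_as_law = round(0.28 * total_requests)  # ~146 requests

# accounts additionally actioned as Terms of Service violations
tos_removals = 282

action_rate = (withheld_as_law + tos_removals) / total_requests
print(f"{action_rate:.0%}")  # ~82%, consistent with "about 80 percent"
```

The exact share depends on how requests map to accounts, but under this simple assumption the report’s figures hang together.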
Report from NGO, considered as a matter of law
This is a new category on Twitter’s report, listed as “EU Trusted Reporters” in the text beneath the table of data. This section provides information on instances where Twitter receives notification from an affiliated European NGO that a tweet or account violates local laws against hate speech. This section appears to be related to Twitter’s participation in the EU’s Code of Conduct on Countering Illegal Hate Speech Online, under which Twitter, YouTube, Facebook, and Microsoft agreed with the European Commission “to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.”
CDT has raised significant concerns about the EU Code of Conduct, which bypasses the judiciary in making determinations of illegal speech and includes no accountability or remedy mechanisms as part of either the companies’ or government’s commitments. While Twitter’s latest report can’t alleviate those concerns, it still sheds some important light on this program and is a welcome glimmer of transparency from one of the companies participating in the Code. Twitter’s report discloses the total number of reports received from these Trusted Reporters (1,434), and indicates that the bulk of these were from French organizations (1,219). In response to the French reports, Twitter reports withholding content in 301 cases and suspending 104 accounts. Importantly, Twitter notes that “86% of the overall French Trusted Reporters requests we received did not specify the reason for removal.”
While the European Commission has criticized the participating companies for not removing 100 percent of content reported to them through this program, it’s important for Twitter and other companies to push back against takedown demands that provide no rationale. Overall, CDT thinks the EU Code of Conduct creates a troubling situation where companies are put in the role of adjudicating Member States’ laws at the behest of NGOs – not a system that is built to provide the due process protections that people are entitled to when government seeks to restrict speech. As the public debate over this policy approach continues, it is essential that we have more information from the participating companies about how this Code is operating in practice.
Government request considered under Terms of Service
Another new category on Twitter’s report, the “Government terms of service reports” section, includes information about instances when government officials contact Twitter and ask the company to remove content under Twitter’s own TOS – specifically, Twitter’s terms against promotion of terrorism. These referrals are not court orders or other legal requests and come to Twitter through its “standard customer support intake channels.” In other words, these referrals involve government representatives flagging content as regular users might.
Twitter reports that 5,929 accounts were reported in this way (from 716 reports), and in 85 percent of cases Twitter took some action against these accounts. Twitter notes that it plans to expand coverage of these extra-legal requests from governments in the future. It’s also important to note that company reporting on this kind of government request depends on governments identifying themselves to companies as they use customer-facing reporting mechanisms. Ultimately, only the government itself can provide comprehensive data about its use of these informal reporting channels.
Though Twitter does not yet disclose which governments are sending such referrals, these requests are likely arising under Internet Referral Unit programs in Europe, including the UK’s Counter-Terrorism Internet Referral Unit and Europol’s IRU program. This data provides useful context for understanding how these programs function. Twitter reports that these government referrals account for less than two percent of account takedowns for promotion of terrorism, and that 74 percent of the nearly 337,000 accounts suspended on those grounds in this six-month period were detected by Twitter’s “internal, proprietary spam-fighting tools.”
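These percentages can be verified directly from the report’s own totals. A quick sketch (the 337,000 figure is described by Twitter as “nearly,” so these are approximations):

```python
# Consistency check on the terrorism-related suspension figures.
total_suspended = 337_000      # accounts suspended for promotion of terrorism (approx.)
government_referred = 5_929    # accounts flagged via government TOS reports
internal_share = 0.74          # share detected by Twitter's internal tools

referral_share = government_referred / total_suspended
print(f"{referral_share:.1%}")                 # under two percent, as reported
print(round(internal_share * total_suspended)) # ~249,000 accounts found internally
```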
It’s also clear from Twitter’s data, however, that informal government referrals comprise a large proportion of the accounts Twitter reviews at the behest of government officials. This extra-legal flagging by governments targeted nearly 6,000 accounts in the six-month reporting period. In the same span, official legal requests from 45 countries covered 13,022 accounts in total, but the Turkish government was responsible for a disproportionate share of those, with legal requests affecting 8,400 accounts; legal requests from the remaining countries targeted only about 4,600 accounts. In other words, governments targeted more accounts through informal flagging than governments other than Turkey targeted through official channels.
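Spelled out as arithmetic, the comparison looks like this (all figures are from Twitter’s report; the subtraction is ours):

```python
# Informal government flagging vs. official legal requests, per Twitter's report.
total_legal_accounts = 13_022   # accounts reported via official legal requests (45 countries)
turkey_legal_accounts = 8_400   # accounts targeted by Turkish legal requests
tos_flagged_accounts = 5_929    # accounts flagged via government TOS reports

legal_ex_turkey = total_legal_accounts - turkey_legal_accounts
print(legal_ex_turkey)  # 4,622 accounts -- fewer than the ~6,000 flagged informally
```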
Without a breakdown by country of the TOS reports figure from Twitter, we can’t compare individual countries’ patterns in making legal requests versus leveraging the company’s TOS reporting system. But this aggregate data demonstrates the chief concern that CDT and other human rights advocates have with government use of companies’ TOS to seek removal of speech from the web: TOS flagging is quicker and easier for government officials than following formal procedures that involve independent oversight and accountability, such as challenging a particular post or account under national law. As the less time- and resource-intensive approach, TOS flagging will hold ample appeal to governments looking to restrict certain speech and speakers online.
But government censorship isn’t supposed to be frictionless. Due process protections, including review by an independent arbiter and the opportunity for the affected speaker to appeal the judgment, are crucial oversight mechanisms that keep governments accountable and ensure that officials cannot censor protected speech. Moreover, national law that restricts speech must meet certain substantive standards, including that a restriction must have a specific legitimate aim and cannot be vague or overbroad. When governments engage in TOS flagging, however, they supplant the standards in national law with a private company’s TOS, which can restrict speech far beyond what a government could permissibly limit.
Transparency supports informed public debate (which is why we always recommend it)
Government efforts to pursue content takedown are a fundamental concern for freedom of expression online. Transparency reports from internet companies provide much-needed information to fuel public debate. Twitter’s expanded report now includes crucial nuance that will aid advocates, journalists, and users in understanding the interplay between governments and companies over restricting content. We urge other internet companies to include similar information in their reports.
In addition, we continue to urge internet companies to release data about their TOS enforcement actions. Twitter’s latest report on government use of TOS provides a glimpse of how TOS operate to affect the speech available on the site, but in all likelihood TOS enforcement with no government nexus accounts for much more content removal. Compare, for example, reports that Facebook receives one million reports of TOS violations a day with the data from its most recent transparency report, which disclosed that about 9,600 items were restricted following government requests. If even one percent of TOS reports to Facebook result in takedown (the number is likely far higher), then the company removes more posts in a day as TOS violations than it does from six months’ worth of government requests worldwide. As governments leverage TOS enforcement processes and amp up the pressure on companies to censor more speech faster, it’s vital that we all have a better understanding of the underlying context for speech online.
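The Facebook comparison, as arithmetic. Note that the one-percent takedown rate is our deliberately conservative assumption, not a figure Facebook reports:

```python
# Scale of TOS enforcement vs. government requests at Facebook.
tos_reports_per_day = 1_000_000   # reported volume of daily TOS violation reports
assumed_takedown_rate = 0.01      # our conservative assumption, not a reported statistic
govt_restrictions_six_months = 9_600  # items restricted per Facebook's transparency report

tos_takedowns_per_day = int(tos_reports_per_day * assumed_takedown_rate)
print(tos_takedowns_per_day)  # 10,000 a day -- more than six months of government requests
```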