{"id":79224,"date":"2016-12-06T11:14:06","date_gmt":"2016-12-06T16:14:06","guid":{"rendered":"https:\/\/cdt.org\/?post_type=blog&p=79224"},"modified":"2016-12-06T12:39:20","modified_gmt":"2016-12-06T17:39:20","slug":"takedown-collaboration-by-private-companies-creates-troubling-precedent","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/takedown-collaboration-by-private-companies-creates-troubling-precedent\/","title":{"rendered":"Takedown Collaboration by Private Companies Creates Troubling Precedent"},"content":{"rendered":"

Yesterday, Facebook, Microsoft, Twitter, and YouTube announced their intent to begin collaborating on the removal of terrorist propaganda across their services. CDT is deeply concerned that this joint project will create a precedent for cross-site censorship and will become a target for governments and private actors seeking to suppress speech across the web.

Governments have a legitimate objective in preventing the commission of terrorist acts, but security concerns can also motivate policies that jeopardize fundamental rights. Yesterday’s announcement comes after several years of demands from governments in the EU, the US, and around the world for these companies to do more to stop the spread of messages from terrorist organizations that seek to recruit individuals and inspire violent acts.


Under this intense pressure, these companies have started a dangerous slide down the slippery slope to centralized censorship of speech online. Below, we describe what is known about this collaboration and discuss the significant risks this approach poses to free expression online, given the complex nature of information about terrorist activity and the dominant role of these companies in the online environment. We offer preliminary recommendations for transparency, remedy, and accountability that must be incorporated if the companies do move forward with this proposal, and urge these companies, independent service providers, and governments to reject the trend toward centralized censorship of the Internet.

Notice and Takedown and Takedown and Takedown and . . .

Four leading US-based internet companies – Facebook, Microsoft, Twitter, and YouTube – plan to begin sharing information about images and video depicting “violent terrorist imagery” in a centralized database, in order to facilitate the removal of this material from their services. As described in yesterday’s announcement, if a participating company identifies an image or video that violates its Terms of Service against “content that promotes terrorism”, that company can submit the hash, or digital fingerprint, of that file to a database. Other participating companies will then be able to scan for matches to this hash among the files that are already on, or are newly uploaded to, their servers. If a participating company finds a match to one of these hashes, the company will “independently determine” whether that file violates its own Terms of Service.

The announcement underscores that this system is voluntary: participating companies retain discretion over whether to submit hashes to the database, and whether to remove any matching files from their own services. The announcement states that “matching content will not be automatically removed.” This is an important distinction: an automated takedown system, where matching files were immediately removed across all platforms without any additional review, would be extraordinarily vulnerable to abuse and to over-blocking lawful speech. Automated takedown would ensure that any removal would propagate across all participating platforms rapidly, with no consideration of context or opportunity to catch mistakes.

It appears that this database of hashes is intended to operate more as a centralized mechanism for notice to participating companies, flagging content for their review, and sending the signal that one of their peers found the content to be “the most extreme and egregious terrorist images and videos”. Even without automated takedown, however, this centralized system will create some risks to free expression.
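To make the mechanics concrete, here is a minimal Python sketch of the flow the announcement describes: one company contributes the hash of content it has already judged to violate its own rules, and other companies check new uploads against the shared set, flagging (rather than automatically removing) any match for their own review. The function names, the in-memory set, and the use of SHA-256 are illustrative assumptions; the companies have not published their implementation, and a deployed system would more likely use a perceptual hash so that re-encoded or lightly altered copies still match.

```python
import hashlib

# Illustrative stand-in for the shared industry database described in the
# announcement. A cryptographic hash is used here purely for clarity; the
# participating companies have not published the details of their system.
SHARED_HASH_DATABASE: set[str] = set()


def fingerprint(file_bytes: bytes) -> str:
    """Compute a digital fingerprint of an uploaded image or video."""
    return hashlib.sha256(file_bytes).hexdigest()


def submit_hash(file_bytes: bytes) -> None:
    """A participating company contributes the hash of content it has
    already determined violates its own Terms of Service."""
    SHARED_HASH_DATABASE.add(fingerprint(file_bytes))


def screen_upload(file_bytes: bytes, review_queue: list[str]) -> str:
    """Screen a new upload against the shared database.

    Matching content is NOT removed automatically; it is flagged for the
    host company's own reviewers, who apply that company's policies and
    exceptions (newsworthiness, context, etc.) before deciding anything.
    """
    digest = fingerprint(file_bytes)
    if digest in SHARED_HASH_DATABASE:
        review_queue.append(digest)
        return "flagged_for_review"
    return "published"


if __name__ == "__main__":
    prohibited = b"bytes of a file one company already removed"
    submit_hash(prohibited)
    queue: list[str] = []
    print(screen_upload(prohibited, queue))  # flagged_for_review, not removed
```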

Centralized database creates target for government censorship efforts

The agreement focuses on collaboration among the participating companies and mentions government only in noting that “each company will continue to apply its practice of transparency and review for any government requests”. But these companies will undoubtedly face substantial pressure from governments and private actors across the globe to expand the scope of the database and include additional content in it. Moreover, the existence of this coordination agreement will almost certainly embolden governments to demand that Internet companies collaborate on censorship efforts for everything from hate speech to alleged copyright infringement.

It would not be difficult to imagine, for example, the EU proposing this kind of coordinated filtering scheme as a way to implement the dangerous monitoring proposals being pushed in the Copyright and AVMS Directives. While the participating companies may not have changed any of their existing Terms of Service or moderation standards under yesterday’s agreement, this hash database represents a new point of centralized control that governments and others will seek to exploit.

This database itself appears to be modeled after the National Center for Missing and Exploited Children’s hash database of images that appear to be child pornography, which these and other companies use to block images from their services using Microsoft’s PhotoDNA product. While that system has its own weaknesses – service providers are required by federal law to report images to NCMEC, but the majority of these images are never adjudicated to be illegal by a court – child abuse imagery is distinct from “terrorist content” in that there is no context in which the publication or distribution of child pornography is considered lawful.
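PhotoDNA itself is proprietary, but open-source perceptual hashing illustrates why this kind of matching catches re-encoded or lightly edited copies that an exact cryptographic hash would miss. The sketch below uses the third-party imagehash library, hypothetical filenames, and an arbitrary distance threshold; all are assumptions for illustration, not a description of PhotoDNA or of any company’s deployed system.

```python
from PIL import Image  # pip install pillow
import imagehash       # pip install imagehash

# A perceptual hash summarizes an image's visual structure, so two copies of
# the same picture that differ only in resolution, compression, or small
# edits produce hashes that are close together rather than identical.
known_hash = imagehash.phash(Image.open("known_prohibited_image.jpg"))
upload_hash = imagehash.phash(Image.open("new_upload.jpg"))

# Hamming distance between the two hashes; the threshold is a hypothetical
# tuning parameter, not a value drawn from any real deployment.
MATCH_THRESHOLD = 8
if known_hash - upload_hash <= MATCH_THRESHOLD:
    print("Possible match: route to human review")
else:
    print("No match")
```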

Lack of clear definition of what speech will be targeted creates opportunity for scope creep

There is no internationally agreed-upon definition of terrorist propaganda. Companies are free to be more restrictive of speech than the government could lawfully prohibit (indeed, it is highly likely that much of the content in this database will be lawful speech in the US), and they have developed idiosyncratic definitions of “content intended to recruit for terrorist organizations”, “dangerous organizations”, and “violent threats (direct or indirect)”. The announcement emphasizes that “[e]ach company will continue to apply its own policies and definitions of terrorist content”. While it would be troubling for the participating companies to converge on a lowest-common-denominator, most-restrictive definition of “terrorist content”, the lack of clear parameters for what material may end up in this database creates its own risks.

Without a bright line denoting what can – and cannot – be submitted to the database, the terms of the agreement are vulnerable to mission creep. Participating companies will face external and internal pressure to include disturbing, graphic, and violent content of many kinds in this database. A clear definition that sets a very high bar for inclusion of images and video in the database would create a bulwark against the inevitable onslaught of proposals for other content to be included.

Incentives are stacked in favor of takedown

The agreement describes the participating companies’ intention to continue conducting their own review of any material brought to their notice under this collaborative system. It’s important for each company to engage in careful review of this content, since they each have different content policies and different standards for newsworthiness or other exceptions. It’s entirely possible that one company would decide to remove an image (posted alone in a tweet, for example), and that another company would decide to leave it up (as a featured image in a news article or otherwise contextualized). Different services have access to different degrees of context about their users and the material they post.


However, the existence of this centralized database, and the fact that another leading internet company has declared an image or video to be “the most extreme and egregious” content, could create a legal expectation that all participating companies are on notice of allegedly illegal content that may appear on their sites. At the very least, it will create a normative presumption against the sharing of this material, which the agreement describes as “content most likely to violate all of our respective companies’ content policies”, and could require participating companies to spend time justifying why they are hosting terrorism-related material that their peers have rejected.

Moreover, if the participating companies move forward with the plans to “involve additional companies in the future”, it seems likely that smaller companies, with fewer resources to spare on in-depth qualitative content moderation, will respond to any hash in the database by simply blocking that content. It could be difficult for a small company to justify, as a financial or reputational matter, a decision to continue hosting “violent terrorist imagery”, even if there are eminently defensible reasons to do so. But this is the danger at the core of a system designed to expedite takedown of content across platforms. The chance that this database becomes anything other than a repository of material prohibited across all participating services seems razor thin.

Uncertain plans for transparency, remedy, and accountability raise more questions

In the announcement, the companies refer to their existing processes for transparency reporting and appealing content-takedown decisions, and note that they “seek to engage with the wider community of interested stakeholders in a transparent, thoughtful and responsible way”. Accountability is a crucial component of any content moderation process, and it’s important that the companies have these independent processes in place. But this unprecedented system for coordinating takedowns raises fundamental questions about what transparency and access to remedy should look like across these services.

It’s not clear what information the participating companies plan to make public about the collaboration project or their independent activity. Will participating companies publish reports about the material they take down based on matches to this hash database? Currently, none of the participating companies publishes regular reports on its Terms of Service enforcement activity. If a company is alerted to a propaganda video by a government actor (e.g., through an Internet Referral Unit), will that company submit the video’s hash to the database (and thereby extend the government actor’s influence to other companies)? What sort of information will be provided about the database itself? How will the companies provide information about what is and isn’t included?

Regarding remedy, if a user’s image or video is taken down across many platforms in a short time, will she know whether this was the result of hash-sharing amongst the companies? Will she be able to submit an appeal to the centralized function, or will she have to petition each participating company individually? (Not all participating companies offer post-level appeals of takedown decisions. For example, Facebook only allows users to appeal the removal of a profile or page.) If a company decides that it mistakenly removed a post and submitted it to the database, will it be able to signal to the other participating companies that they should re-evaluate their decisions, too?

Recommendations

We have just begun to analyze this program and its potential effect on free expression online, but offer the following preliminary recommendations to respond to some of the worst risks:

To avert government pressure or co-optation of the database: