By Maura Carey
The free and open internet has created unprecedented opportunities for expression, creativity, and connection. It has become a lifeline for human rights activists and dissenters whose speech would otherwise be censored by repressive regimes. But, as a greater share of the world’s speech moves online, governments are ramping up pressure on online intermediaries to regulate user speech and address abuse on their services, which can put the free expression rights of users around the world at risk.
China’s Great Firewall is probably the most extreme example of a government’s crackdown on online speech, but forms of digital censorship also occur in democratic countries like India, member states of the European Union, and the United States. Digital censorship comes in many forms and can be an indirect consequence of well-intentioned regulations, which must strike a delicate balance between protecting users’ free speech rights and mitigating the harms of online hate speech, terrorist content, and misinformation.
Free expression issues are inextricably entwined with U.S. global trade policy. Governments seeking to suppress online speech tend to target online intermediaries rather than individual speakers because providers are easier to identify and control. Many of the most prominent online content hosts are based in the United States. These providers decide whether and how to operate their services in overseas markets based on the legal and regulatory environment they would face. Harsh intermediary liability laws and strict regulations on online expression create barriers to trade and investment in those markets for U.S.-based companies. Governments across the world are also seeking to expand jurisdiction over online service providers in order to force them to comply with requests to take down speech, contributing to a climate that makes it difficult for U.S. companies to invest abroad. Conversely, these same laws can place significant pressure to censor on companies that have already invested in a market.
The U.S. International Trade Commission (USITC) recently launched an investigation, at the request of members of the Senate Finance Committee, into the trade and economic effects of foreign digital censorship practices. CDT submitted comments to help inform the Commission’s report. Though the USITC investigation was launched with a particular focus on digital censorship practices in China, CDT’s aim was to draw the USITC’s attention to legislative proposals throughout the world that are likely to have a chilling effect on online speech.
In our comments, we discuss four key trends in global regulation that pose threats to online speech:
- Requirements that non-judicial actors make determinations about the legality of speech;
- Government pressure on intermediaries to implement automated content filtering;
- Government officials’ manipulation of private content moderation processes; and
- Mandates to locate data and personnel in-country to increase the government’s leverage over a private company.
The first trend involves proposed changes to intermediary liability frameworks that pressure companies into determining whether speech hosted on their platforms is illegal. In the United States, platforms are generally shielded from liability for user-generated content under 47 U.S.C. § 230. Other countries take different approaches to intermediary liability law. One common approach is conditional liability protection, also known as “notice and action” or “notice and takedown.” Under this kind of regime, an intermediary must take action against illegal material once it is notified of the material’s presence, but is otherwise shielded from liability for user-generated content. In some countries, however, no such protections exist, and online services can be fined, sued, or otherwise sanctioned for hosting user-generated content that violates local regulations.
Given the importance of a strong and clear intermediary liability framework to promoting online free expression, CDT is concerned about recent legislative proposals, including the draft Digital Services Act in Europe and the 2021 Intermediary Liability Rules in India, that would expand providers’ liability for user-generated content. One troubling consequence of such rules is that they force providers to make determinations about whether content is illegal. Under international human rights norms, only independent judicial bodies should be empowered to determine whether an item of speech is illegal. The Indian Intermediary Rules would force providers to review content flagged by users or government agencies and determine whether the content violates local laws. Providers that refuse to comply could face strict sanctions, including significant fines and even criminal sentences of up to seven years for the intermediary’s employees. In our comments to the USITC, CDT expressed support for policies that recognize that only an independent judicial authority can determine whether speech is lawful.
The second trend we discuss is the growing pressure on online intermediaries to use automated tools in content moderation. When governments pressure online intermediaries into using various kinds of automated filtering techniques to scan their platforms for content that may violate local laws, they run the risk of suppressing lawful speech. Despite significant progress in these technologies in recent years, machine learning classifiers still struggle with subtle variations in context that would be obvious to a human reviewer and that are essential to understanding the meaning of the content. Relying too heavily on these tools, without an appropriate level of human oversight, will inevitably lead to takedowns of lawful content. Automated content moderation techniques may also disproportionately censor the voices of members of minority groups if discrimination is baked into the tools’ design.
Governments do not always directly require the use of automated filtering tools. Instead, they often indirectly require these tools by imposing broad content moderation mandates that online intermediaries struggle to meet without using automation in some form. One example of such an indirect requirement is Germany’s Network Enforcement Act (NetzDG), which requires providers to identify and remove “manifestly unlawful content” within 24 hours of receiving a notification about it. The Indian Intermediary Rules contain a similar provision that requires intermediaries to remove certain content on a 36-hour timeline. In practice, these short turnaround times likely cannot be met by relying solely on a human workforce to review and take down content. The USITC should encourage U.S. trade negotiators to reject regulations that would impose automated content filtering mandates on U.S. companies. When intermediary liability frameworks are the subject of negotiations, the USITC should also encourage policymakers to evaluate proposals for the risk that they will indirectly impose filtering obligations.
The third trend that CDT describes in our comments is foreign governments’ increased reliance on companies’ terms of service to suppress speech. This kind of censorship occurs when government officials flag or report content to providers on the basis that it violates not the government’s laws but rather the providers’ own terms of service. The Vietnamese government, for example, has engaged in mass reporting campaigns to demand that content posted by journalists and human rights advocates be removed from Facebook and other social media platforms. These tactics allow governments to effectively censor content outside their own borders because platforms’ terms of service generally apply worldwide. Government officials who engage in this kind of censorship argue that they are merely making companies aware of content that violates their terms of service, but these referrals are often made under the threat of retaliation, leaving providers with little choice but to comply with censorship demands. U.S. trade negotiators should discourage trading partners and allies from referring specific content to providers for takedowns. When these referrals do occur, they should be documented in transparency reports.
The fourth and final trend that CDT is concerned about is the rapid adoption of so-called “hostage provisions” in national legal frameworks. Hostage provisions are mandates that companies locate personnel within a country’s borders in order to make it easier for the government to surveil users or suppress speech. Personnel localization laws can be especially pernicious because they allow governments to jail platform employees when the platform refuses to comply with censorship demands. India, for example, has threatened to imprison employees of Facebook, Twitter, and WhatsApp in retaliation for the platforms’ refusal to take down content associated with Indian farmers’ protests. CDT opposes the use of hostage provisions in national law, and recommends that U.S. trade negotiators seek commitments from trading partners that platform employees will not be imprisoned or punished in retaliation for content moderation decisions.
CDT believes that free expression issues should be a central concern of U.S. international trade policy. Policies that promote free expression abroad create a positive feedback loop that benefits users throughout the world. The reverse is true as well: policies that suppress speech degrade the online environment for all users. They can lead to a decline in the quality of services for non-English speakers and cause companies to underinvest in hiring culturally competent staff. U.S. trade negotiators can play an important role in safeguarding the ability of users all over the world to use the internet to connect, create, and express ideas.
CDT welcomes the opportunity to help guide the USITC in its investigation into digital censorship and to shed light on the intersection between U.S. trade policy and the fight to protect digital rights.