We rarely notice the underlying structures that make up our internet experience. But take a moment for a brief thought experiment – imagine if everything in your Facebook or Twitter feed had to be pre-screened. What if those companies were treated like publishers, such as the Washington Post or the New York Times, and they had to verify that the published work on their site wasn’t false or defamatory or risk legal consequences? The Facebook fact checker would have to call your aunt to confirm she had a strong basis for writing those potentially libelous things about her new neighbors.
It’s impossible, right? If websites were treated like publishers of their users’ speech, the modern internet wouldn’t exist. The sheer resources required would cripple any site, and the decentralized, user-driven web would vanish. We take for granted today that no site is legally responsible for the content of its users’ postings. But that wasn’t a foregone conclusion two decades ago, when the web was in its infancy. Instead, some companies were suing to hold service providers to exactly this standard. The most notable case was Stratton Oakmont, Inc. v. Prodigy Services Co., in which a New York court held that online service provider Prodigy was liable as a publisher in a $200 million defamation suit brought by the brokerage firm Stratton Oakmont (ironically, that same firm would later become infamous for its fraudulent sales tactics, as memorialized in the movie The Wolf of Wall Street).
Pushed by civil society, including a scrappy young internet rights organization called the Center for Democracy & Technology, in 1996 Congress adopted the Cox/Wyden amendment which became Section 230 of the Communications Act. In relevant part it says that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This amendment, in many ways an aside in a larger debate about pornography, became a linchpin in the legal structure of the internet.
Twenty years later, CDT continues to focus on the underlying physical and policy structures that allow the internet to flourish. As we look forward to the next 20 years, this work on core internet structures will only become more important as we consider issues around access, governance, and standards. Here we outline some of our current focus areas while looking at emerging issues in this space.
Access to Information
The internet is one of the greatest tools for self-expression and access to information ever created. But too many people – especially in the developing world – lack internet access. One proposed solution is to reduce the cost of access through zero rating – eliminating the cost of the data used for certain services or for basic internet access. Such plans, however, create unavoidable tension with network neutrality, a cornerstone policy for protecting and promoting access to information online. While the debate around zero rating tends to be polarized, CDT’s recent report delineates key criteria for evaluating whether a zero-rating program that facilitates short-term access to some sites and services can also ultimately increase users’ access to the open internet.
Once users are online, we recognize that the infrastructure for speech is in large part owned and operated by a series of private actors — content hosts, server operators, access providers, network operators, domain name registrars, etc. Given the reality of the technical infrastructure, a legal structure ensuring that these intermediaries cannot be taken to court over the speech that their users post and transmit is essential to supporting free expression online. Intermediaries are also very obvious targets for state censorship; it is much easier to target YouTube with pressure to censor than it is for a government to pursue individual speakers.
Finally, one of the great equalizers for access to information online is the preponderance of free services largely supported by advertising. What does the rise of ad blockers mean for this ecosystem, and for journalism and the news business? This tricky question is compounded by the reality that advertising has become the driver of some of the web’s most intrusive tracking. We’re searching for a path forward that generates revenue for content providers and websites while respecting the right of individuals to control their digital footprint.
Rules for Governance and Governments
The internet is also a physical space. Made up of networks of servers and globe-spanning wires, it is inextricably bound up with the ‘real world’, and with global struggles around jurisdiction and national sovereignty. Who “runs” the internet? How does it interact not just with end users but also regulators and governments?
The internet’s original distributed, bottom-up governance model has served it well for decades. A wide variety of stakeholders – civil society, companies, technologists and engineers, and governments – have worked through global entities like the Internet Corporation for Assigned Names and Numbers (ICANN) to set rules for the domain name system. Technical standard-setting bodies like the Internet Engineering Task Force (IETF) have long histories of sustained and successful multistakeholder approaches to problem-solving. But recent efforts by some politicians to block the multiyear effort to transition ICANN into a truly global body – free from the direct influence of any one government but still accountable to its stakeholders – show that this model can’t be taken for granted. We must continue to work to sustain the open, participatory nature of internet governance.
The distributed nature of the web also creates challenges for law enforcement. The continuing adoption of cloud services, while creating enormous value for consumers, creates new challenges for law enforcement investigations. As information flows across national borders, what rules should governments follow? Should the focus be on where data is stored, or on the nationality of the subject of the investigation – or should access be governed by a collaborative process between governments? The answers to these questions will affect the privacy rights of every internet user and are being debated as part of international treaties and in national government deliberations.
Infrastructure and Standards
The infrastructure of the internet has changed remarkably over more than thirty years, while the basic functionality it enables has not – sending information in groups of “packets” from one place to another. Technical standards venues like the IETF and the World Wide Web Consortium (W3C) must increasingly grapple with how infrastructure enables or thwarts human rights online, such as by enabling censorship and undermining user privacy. But what is the right way to champion non-technical – often somewhat political – concepts in technical venues? Do efforts to locate more functionality “in the middle” of the network necessarily mean compromises in the end-to-end nature of internet technology, creating powerful points of control? We work to embed human rights into these standards and demystify the technical underpinnings of the internet in order to explain why these structures are so important.
More generally, the need to expand the use of strong encryption continues to raise policy issues. It is unquestionably the gold standard for internet security – enabling the secure, private, and authenticated communications that make e-commerce possible. And the structural characteristics of the internet – all speech is carried to its recipient by multiple parties – necessitate strong encryption to maintain individual privacy and free expression. But encryption works best when individual users alone hold the key to their communications. This technical reality runs counter to the practices of many law enforcement agencies, which typically serve legal orders directly on providers so as to avoid alerting the subjects of investigations. How do we address the real needs of law enforcement officers, armed with orders approved by judges, when the subject of a criminal investigation may be the only person with access to the information covered by that order?
Ultimately, what’s most striking and important about the structures of the internet is their durability. Like the policy approach of Section 230, decisions about how the internet will operate endure not just for years but for decades. Over that time, they have a dramatic impact on our internet experience and our ability to enjoy fundamental rights like privacy and free expression. That’s why we will continue to focus not just on individual policy battles, but also on improving the architecture, standards, and laws that shape our internet experience. We hope you’ll join us in these efforts.