Washington Court Ruling Against Backpage.com Is a Setback for Internet Platforms and Online Speech

The Washington State Supreme Court delivered a disappointing decision last week, allowing a lawsuit against Backpage.com to proceed over sex traffickers’ use of its classified ads service. Three minor trafficking victims brought the suit against Backpage, alleging that the website played a “substantial role” in developing the content of the advertisements posted by the traffickers who abused them.

CDT and EFF filed an amicus brief in the case, describing the broad consensus among US courts that Section 230 grants website operators immunity from liability for user-generated content. Importantly, many courts have found that a website operator does not become a content creator merely by describing a content policy or terms of use for its website. We urged the court to grant Backpage’s motion to dismiss.

Unfortunately, the Washington Supreme Court found for the plaintiffs, holding that Backpage cannot benefit from the broad legal protections of Section 230 because the plaintiffs alleged that Backpage’s policies defining the kinds of ads it would host “essentially provide pimps with guidelines to have their minor escort ads accepted for posting.” Before the case can be dismissed, the Court ruled, Backpage must answer the plaintiffs’ assertion that its policies were specifically designed to enable pimps to sell minor victims.

This is a troubling interpretation of Section 230, inconsistent with the holdings of other courts. As the Washington Court itself observed, “A website operator … does not ‘develop’ content by simply maintaining neutral policies prohibiting or limiting certain content.” Yet that is precisely the effect of the Court’s decision: it allows the suit against Backpage to proceed on assertions about how Backpage designed its content policy and on a single bare allegation that Backpage “has a substantial role in creating the content and context of the advertisements on its webpage.”

Congress drafted Section 230, in part, to protect websites from lawsuits based on their moderation of user postings. This protection from liability was designed to encourage voluntary monitoring for offensive, obscene, or illegal content. Section 230’s immunity was also meant to spare website operators from having to defend against bare allegations that they are responsible for their users’ content.

Online providers do not become “developers” of information content merely because plaintiffs claim that they are. As we argued in our brief, “Plaintiffs must allege specific, non-speculative behavior that a provider authored or developed the content in question or else the claim must promptly be dismissed.” Where a website clearly participated directly in developing the alleged illegal content, immunity from suit is properly lost. But in cases like this one, where the claim of collusion rests only on the apparent implications of website design and content policies, Section 230 requires that the complaint be dismissed. This is because Section 230 is intended to shield online intermediaries not only from ultimate liability, but also from the cost and uncertainty of protracted litigation over claims that should have been promptly dismissed.

A primary goal of Congress in passing Section 230 was to encourage the development of speech-facilitating technologies and to spur innovation and growth in the digital economy. Exposing digital speech technologies to recurring liability claims based on unsupported allegations that providers “developed” their users’ content would inevitably make technology and online services more expensive, more restrictive, and ultimately less available to the public. Section 230’s immunity from third-party liability helped create the diverse, expansive Internet we know today.

For anyone who hosts third-party content, the simple takeaway of decisions like the Washington Supreme Court’s is that they are safer if they do nothing to create and enforce rules against unlawful and abusive activity, and safest if they provide no interactive services at all. This is the opposite of the outcome Section 230 envisioned and a dangerous precedent for innovation and online expression.