Australian officials announced this week that plans for a mandatory Internet filter will go forward. Broadband, Communications, and Digital Economy Minister Stephen Conroy heralded the results of recent live testing, and will introduce legislation in August that will force ISPs to block access to websites around the world that are deemed to contain content that is illegal in Australia. Needless to say, it is disappointing to see a democratic government following China’s lead down the path of Internet censorship.
The plan would require all ISPs to block a subset of an existing blacklist of illegal content. In Conroy’s words, this will include “child sex abuse content, bestiality, sexual violence including rape and the detailed instruction of crime or drug use”—certainly what many would consider the worst of the worst. This is narrower than the original plan to filter the full blacklist, which includes legal content deemed inappropriate for children under 15 when it is not behind an age-verification system. But the system nonetheless poses major problems for free expression online.
First, the lack of transparency surrounding what gets blocked could lead to overblocking. The blacklist is and will continue to be a closely guarded secret (lest it become a road map to the very sites that the government would like to block). This kind of problem has already come up: a version of the blacklist leaked last spring included the URLs of a dentist’s office, an anti-abortion activism site, and a collection of PG-rated photos. Without adequate transparency, Aussies will have nothing but Ministerial assurances that what makes the list is truly worthy of blocking, which can easily lead to repressive abuses of the kind well-known in China.
The notion that keeping the list secret will raise any sort of barrier to accessing banned sites is a fantasy. At best, blacklist web filtering (which doesn’t cover peer-to-peer or other protocols) is only effective at preventing accidental or casual access. Determined users will still find their way to blocked material, whether or not the list is a secret.
To Conroy’s credit, he has initiated a public consultation on transparency rules for the process by which sites will be added to the list. Transparency of process would be a positive step, but none of the proposals listed in the request for comment suggest transparency for the list itself.
More troublingly, the promised legislation will include incentives for ISPs to offer subscribers much broader content blocking. While CDT has long championed end-user parental controls in the child safety context, moving such filtering from the PC to the network poses serious risks. It is not clear that users will have any significant control over what gets blocked, nor that they will have any recourse in case of mistakes.
ISP-run filtering will unavoidably overblock access to sites that many people would consider acceptable. The second-level filters included in recent tests blocked between 2 and 4 percent of innocuous test sites. The testing firm rightly declared that this overblocking rate was unacceptable, but indicated that rates of up to 2 percent would be tolerable. That might not sound like a lot, but 2 percent of the billions of pages on the web still means tens of millions of dolphins in the tuna nets. And if you happen to own one of the improperly blocked sites, it means an Internet death sentence, at least in Australia.
This kind of mandatory filtering system would almost certainly run afoul of the First Amendment if implemented in the U.S., but unfortunately, examples of such filtering are popping up more and more around the world. Other democracies, such as the UK, have put voluntary systems in place, but it is extremely troubling to see Australia so bent on mandatory filtering, especially in the face of increasing evidence that education and user empowerment are far more effective tools for child protection online. CDT will continue to fight against these overzealous and under-effective efforts, in the U.S. and abroad.