In recent weeks, multiple apps promising “secret” messaging have had sensitive data exposed by breaches and by the apps’ not-so-secret data-sharing practices. This news makes one thing clear: the term “anonymity,” as used by apps that ostensibly enable individuals to post updates anonymously, often promises too much. Many applications promising anonymity collect highly specific user data despite representations to the contrary. This data is often monetized through sharing with third parties, and it is almost always susceptible to unauthorized access.
The Whisper incident is an example of this misrepresentation of anonymity. After the Guardian reported that popular messaging app Whisper shares users’ IP addresses with government entities, Whisper conceded that this was true. However, the app maintains that the service “does not collect nor store any personally identifiable information (PII) from users and is anonymous.” This position is puzzling for two reasons. First, Whisper’s exclusion of IP addresses from its definition of PII directly contradicts federal authorities’ interpretation of the term: NIST includes IP addresses in its definition of PII. Second, however “PII” is defined, simply refraining from collecting it does not guarantee anonymity.
To the average law-enforcement agent, technologist, or even self-taught hacker, there’s very little anonymity in Whisper’s messaging platform; with the right tools and a log of IP addresses, messages could certainly be linked to a particular user. Technically guaranteeing anonymity is difficult because it requires formally separating a user’s identity from their activity. Software that seeks to deliver anonymity has to functionally ensure that there is no possibility of connecting a user’s identity to the services the software provides. Even if this is engineered perfectly, there is always a chance that other metadata may be sufficiently unique to map a user to their real-life identity. Moreover, if a user uses the service in a non-anonymous way (for example, by announcing their name to the world), any technical guarantees no longer apply.
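To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of linkage described above. It assumes a service keeps two logs: one mapping IP addresses to logged-in accounts, and one of “anonymous” posts tagged with the posting IP. The log formats and field names are invented for illustration; the point is only that a trivial join across the two logs re-identifies posters.

```python
# Hypothetical illustration: re-identifying "anonymous" posts via IP logs.
# The log formats and field names below are invented for this sketch.

login_log = [
    {"ip": "203.0.113.7", "user": "alice"},
    {"ip": "198.51.100.4", "user": "bob"},
]

anonymous_posts = [
    {"ip": "203.0.113.7", "text": "a supposedly anonymous confession"},
    {"ip": "192.0.2.9", "text": "posted from an unlogged address"},
]

# Build an IP -> account index from the login log.
ip_to_user = {entry["ip"]: entry["user"] for entry in login_log}

def deanonymize(posts, index):
    """Link each post to a named account whenever its IP appears in the index."""
    linked = []
    for post in posts:
        user = index.get(post["ip"])
        if user is not None:
            linked.append((user, post["text"]))
    return linked

print(deanonymize(anonymous_posts, ip_to_user))
```

Any party holding both logs, whether the company itself, a government with subpoena power, or an attacker after a breach, can perform this join; no sophisticated tooling is required.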
Companies need to implement “anonymity” in a way that is transparent, meets user expectations, and is designed directly into the technology provided. It is not enough to refrain from collecting PII or to place responsibility on consumers to decipher what data a company can view or store; many consumers misunderstand how their data is used and do not read privacy policies. Companies must learn to identify consumer expectations around data security and adjust their collection practices and marketing accordingly.
If anonymity cannot be functionally achieved, an app should not brand itself in this manner and should not use terms to describe data practices that obscure, rather than clarify, the amount of confidentiality and anonymity that users will realistically enjoy. “Pseudonymous” services, for example, allow for posting under a fake name, but do not go so far as to promise complete anonymity. Many pseudonymous services (such as reddit and Tumblr) contain information that can be traced back to real people if ISP logs are subpoenaed by law enforcement or in civil discovery. However, users of these services are arguably more aware and accepting of these and other risks of pseudonymous sharing, since they are not made to believe that their every interaction with the app will remain completely confidential. Therefore, a company promising complete anonymity should make good on its promise and define technically how it accomplishes anonymity.
CDT and the ACLU have noted in a submission to the UN Human Rights Council that many human-rights defenders around the world rely on anonymity tools in order to avoid government censorship or retaliation. Though some activists are technologically savvy and understand the limits of the protection certain apps can offer, many do not. Therefore, an app that invites users to post materials “anonymously” without being able to deliver on that promise may wind up luring some very vulnerable people into danger.
As the international community has generally recognized, businesses should comply with human-rights standards. Legally, only governments are bound by international human-rights treaties; however, the UN Guiding Principles on Business and Human Rights make it clear that consumers can demand that a company such as Whisper prevent adverse human-rights impacts linked to its app. In other words, companies should refrain from the kinds of behavior that, if performed by a government, would violate human-rights standards (such as suppressing free speech or tracking the location of individual users despite anonymity guarantees).
Companies should also be transparent about how often they share private data with governments and under what circumstances. As CDT has highlighted, a number of major ISPs and telecommunications companies strive to ensure that all government requests for personal data are lawful, and to that end publish transparency reports meant to clarify the types of data the company shares with governments. When companies fail to adequately inform users about their data policies, they undermine transparency, a principle that, as the UN High Commissioner for Human Rights has pointed out, is essential to human rights.
Finally, CDT has worked with the Global Network Initiative to develop principles and implementation guidelines that can help companies ensure that the human rights of their users are respected when companies face governmental demands for user data. Internet companies must carefully think through the potential vulnerabilities their users face when the company receives a demand for user data; companies should also be prepared to push back against overbroad or vaguely justified government demands.
Anonymous communication services are neither impractical nor impossible, but they are technically difficult to achieve. Consumers must be wary of claims that an app offers an anonymous service, as one company’s definition of “anonymous” may simply mean what Whisper’s meant: a public perception of anonymity, while the company’s actual operations violate that promise for business and surveillance purposes. CDT embraces the demand for anonymity-enabling tools; there is value to this type of speech, as it allows individuals to feel more comfortable voicing potentially unpopular (but perfectly legal) ideas. However, companies responding to this demand must ensure their services actually provide the robust, privacy-protective communication platforms consumers desire.
CDT will discuss these concerns further in future blog posts. In particular, we will explore the technology behind creating and maintaining “anonymous” messaging services.