GDPR: Avoiding Harms and Expanding Risk

“But what’s the harm?” Far too often, this is one of the biggest questions posed in debates about the value of privacy and the costs of violating it in the United States. Just last fall, the Federal Trade Commission conducted a workshop exploring the contours of “informational injury”, in which CDT participated. Discussions around the event highlighted a conflict we commonly have with commercial actors: Industry participants repeatedly criticized any consideration of abstract or hypothetical privacy harms, but CDT cautioned that meaningful protections for individuals’ dignity and personal autonomy demand a broader understanding of privacy risk that considers user expectations and concerns.

What Is a Privacy Risk?

When it comes to the economics of privacy, data tends to distort the relationship between companies and individuals: individuals are likely to undervalue their privacy, while companies tend to be overly optimistic about the potential value of collecting ever more data. The risks of collecting and sharing data, and the stakes of ignoring those risks, have been borne out over the last six months.

  • In January, Strava, which bills itself as a social network for athletes, was criticized for inadvertently revealing information about the location and movements of U.S. soldiers in conflict zones via its “Global Heat Map”. The incident also raised concerns about the potential to re-identify users based on their Strava location privacy settings. Not only was this yet another case illustrating the false sense of protection that surrounds “anonymous” and aggregate data, it also demonstrated how “privacy” issues can be narrowly construed by companies to mean “settings.” Strava acknowledged that it needed to devote engineering and UX teams to simplifying the layers of privacy settings it offered to users. But more or better privacy controls can only do so much to mitigate the ethical and confidentiality risks created by defaults that emphasize public release of location information.
  • In March, it was revealed that The Retail Equation, a retail fraud analytics company, had created a shared database with major retailers to determine which consumers may have engaged in item return fraud. The database includes records from government-issued IDs like driver’s licenses, to which individuals’ shopping histories are appended, along with a set of behavioral metrics used to calculate a consumer’s “risk score” for engaging in return fraud. The big problem: individuals are enrolled in this database and informed of its decisions at the time of an attempted return, without any prior notice or ability to consent. They are not told which stores use this fraud detection system, nor even what their actual “risk score” might be. The Retail Equation has argued that its system permits more lenient treatment for 99% of consumers, but the creation of an opaque, secret system that can negatively affect individuals, from their finances to their reputation, poses serious privacy risks.
  • In April, researchers discovered that the dating app Grindr was sharing users’ HIV status and other sensitive personal information, including sexual preferences, with third-party vendors providing analytics services, in some cases in unencrypted form. Grindr stated that this information was shared only with service providers for app improvements, but before announcing that it would cease sharing HIV status, the company also protested that Grindr was a “public forum” and that users “should carefully consider what information to include in your profile.” This oft-used refrain damages trust in the online ecosystem as a whole. It not only places a tremendous burden entirely on users but also shifts onto them the responsibility that organizations have to conduct their own risk assessments.

In each of these real-world examples, industry players were caught flat-footed, even though any careful consideration of potential privacy risks should have surfaced some, if not all, of these issues. This suggests, at a minimum, that industry needs a broader definition of what constitutes privacy risk, one that aligns more closely with how these issues actually play out in the real world.

The GDPR’s Calls for Comprehensive Risk Assessments

Emphasizing privacy risk is something all sides will agree the EU’s General Data Protection Regulation does with aplomb. Recital 75 of the GDPR recognizes that privacy risks can lead to physical, material, and non-material damage. It is especially concerned with, among other things, risks of “discrimination, identity theft or fraud, financial loss, damage to the reputation, loss of confidentiality of personal data protected by professional secrecy, unauthorised reversal of pseudonymisation, or any other significant economic or social disadvantage.” Loss of control over personal data and data processing involving a wide range of sensitive categories, including health or genetic data or data concerning one’s sex life, are also highlighted as risks to consider. These prongs align with the set of injuries identified ahead of the FTC’s workshop by former Chairman Ohlhausen, but the GDPR goes further, specifically acknowledging concerns about personal profiling and about the processing of large amounts of personal information that can affect a large number of individuals.

Assessing privacy risk does not mean that data can never be collected, used, or shared. It does mean that risks should be carefully weighed against purported benefits. As often as companies bemoan hypothetical harms, they are quick to promote less-than-realistic benefits. The GDPR challenges this framing through multiple mechanisms that require companies to weigh risks and be held accountable for them.

First, the longstanding legal instrument detailed in Article 6(1)(f) allows companies to process data if their “legitimate interests” are not overridden by the risks to individuals identified in Recital 75. In order to assess whether a company has a legitimate interest in using information, it must ask whether there are other ways of achieving the identified interest or result. It must also evaluate the nature of the company’s interest and any mitigations or safeguards that can be put in place. Some privacy advocates view the “legitimate interests” basis with a degree of trepidation, lamenting that it gives companies considerable discretion and keeps users in the dark, and so European regulators have called for an internal analysis that (1) requires full consideration of a number of factors and (2) cautions against treating this as a straightforward balancing test.

The “legitimate interests” basis is not new with the GDPR. What is new is the requirement that companies engage in a formalized “data protection impact assessment,” or even consult with regulators, for activities that pose a “high risk” to individuals. Article 35 of the GDPR provides several examples of the sorts of activities that require assessing risk, including (1) automated processing and profiling that has a legal or similarly significant effect on individuals, (2) large-scale processing of certain sensitive data, and (3) systematic monitoring of publicly accessible areas on a large scale — all activities found in the industry examples identified above.

Further, the Article 29 Working Party — soon to become the European Data Protection Board — built on these three examples to identify nine criteria for companies to consider in assessing privacy risks (a rough sketch of how these criteria might be screened in practice follows the list):

  • Evaluation or scoring (including profiling and predicting);
  • Automated decision-making with legal or similar significant effect;
  • Systematic monitoring;
  • Sensitive data or data of a highly personal nature;
  • Data processed on a large scale;
  • Matching or combining data sets;
  • Data concerning vulnerable data subjects;
  • Innovative use or applying new technological or organizational solutions; and
  • Processing that prevents individuals from exercising their rights, using a service, or entering into a contract.
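
To make this screening concrete, below is a minimal sketch, in Python, of how an organization might check a proposed processing activity against these nine criteria. The criterion labels, the ProcessingActivity structure, and the two-criteria threshold (a common reading of the Working Party’s guidance that processing meeting two or more criteria will, in most cases, call for a full assessment) are illustrative assumptions rather than anything prescribed by the GDPR’s text.

```python
# Illustrative sketch only: the criterion labels and the two-criteria threshold
# are assumptions drawn from the Article 29 Working Party's DPIA guidance; this
# is a screening aid, not a substitute for legal analysis.
from dataclasses import dataclass, field

CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_significant_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology_or_organisational_solution",
    "prevents_exercise_of_rights_or_access_to_services",
}

@dataclass
class ProcessingActivity:
    """A proposed data-processing activity and the risk criteria it triggers."""
    name: str
    triggered: set = field(default_factory=set)  # subset of CRITERIA

    def likely_needs_dpia(self, threshold: int = 2) -> bool:
        """Flag the activity if it meets the threshold number of criteria."""
        return len(self.triggered & CRITERIA) >= threshold

# Example: a public, aggregate location "heat map" feature, loosely modeled on
# the Strava case described above.
heat_map = ProcessingActivity(
    name="public aggregate heat map of user locations",
    triggered={"systematic_monitoring", "large_scale_processing",
               "sensitive_or_highly_personal_data"},
)
print(heat_map.name, "-> DPIA likely required:", heat_map.likely_needs_dpia())
```

A checklist like this is only a first screening step; a full data protection impact assessment still requires describing the processing, assessing its necessity and proportionality, and identifying measures to address the risks.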

European industry organizations have already assembled privacy risk matrices that are comparable to the broader framing of risk proposed by the National Institute of Standards and Technology (NIST) and to Professor Daniel Solove’s taxonomy of privacy. The tools for performing risk assessments are already in place; the next step is enacting an omnibus privacy law in the U.S. that adds meaningful enforcement to the mix.

Privacy discussions must include a broader understanding of the risks that come with our data-driven world, or any resulting rules risk becoming outdated or meaningless. When privacy protection is cabined within a narrow conception of legally compliant notice-and-choice about data practices, as has been the case in the U.S. for too long, privacy risks are offloaded onto users, and they are the ones who ultimately bear the informational injury. Perhaps detailed risk assessments existed somewhere within Grindr, The Retail Equation, or Strava, but users never saw them.

Privacy risks are not hypothetical, and over and over again, their emergence is the byproduct of companies discounting the expectations of users while seeing only benefits for themselves. European privacy law flips that script, and its wide-ranging conception of risk should inform conversations about informational injury stateside. While marketers and venture capitalists insist that a broader conversation will stifle innovation or otherwise deprive consumers of meaningful benefits, failing to have one leads to a world where privacy — and user autonomy — die a death of a thousand cuts.