

From Our Fellows: How Online Behavioral Advertising Harms People

By Sauvik Das, Assistant Professor at Carnegie Mellon University and CDT Non-Resident Fellow, and Yuxi Wu, PhD candidate at the Georgia Tech School of Interactive Computing

Disclaimer: The views expressed by CDT’s Non-Resident Fellows and any coauthors are their own and do not necessarily reflect the policy, position, or views of CDT.

Online behavioral advertising—the practice of serving users advertisements based on interests and contexts inferred from their personal data—is the cash cow of surveillance capitalism. It undergirds a multi-trillion-dollar digital economy. Today, at least six of the top 10 most valuable companies in the world — Apple, Microsoft, Alphabet, Amazon, Nvidia, and Meta — benefit from the collection, processing, or sale of personal data. Together, these six companies have a combined market value of nearly 10 trillion dollars, higher than the GDP of every country in the world except the two largest economies, the U.S.A. and China.

Paradoxically, the “value” of your personal data amounts to very little—if you were to believe the courts. After one of the most egregious data breaches in recent history, in which Equifax leaked the personal financial records of nearly 148 million Americans, the vast majority of affected individuals received redress of less than $20, with the median closer to $5. A similar pattern is expected to play out for the Facebook-Cambridge Analytica scandal, where the average affected individual can expect compensation of roughly $30.

Why this discrepancy? We can rationalize it. Indeed, the monetized value of any individual’s data may not, in fact, be much; the true value of personal data may well come from the aggregation of many people’s data and attention available for collection, inference, and sale, and from the insights this “Big Data” makes possible. But if our data really isn’t worth much, why are the costs of surveillance capitalism so high, and why does it feel so creepy, so unnerving, so extractive?

We’ve known for years that people find these ads, and the surveillance infrastructure they necessitate, “scary” and “creepy.” Of course one might find it creepy when Target infers that a teenage girl is pregnant based on her purchases and divulges this information to her family before she is ready. It would make sense to feel unnerved by an ad for migraine medication appearing after a confidential conversation about one’s susceptibility to migraines. And it can indeed feel exploitative to see advertisements for funeral services shortly after a loved one has passed away. These experiences can directly translate into negative changes in the way people feel about themselves, represent themselves to others, and exercise control over their lives.

Such changes are all harms. Harm is both a colloquial and a legal term, which Black’s Law Dictionary defines as “injury, loss, damage; material or tangible detriment.” But whereas financial losses and physical injuries are clearly identifiable as harms in a court of law, many legal scholars have discussed how privacy harms — like those associated with online behavioral advertising (OBA) — are less well understood and often go unrecognized.

Some of the harms of OBA might seem innocuous. For example, maybe you search for something personal at home, then see ads for it on a work device and feel embarrassed. These small, seemingly mundane events can accumulate into a loss of control over your interpersonal context and a feeling of being constantly surveilled without consent.

Other harms are more egregious. Ad delivery platforms may make inferences about something you’re sensitive about, and create stereotypical or discriminatory portrayals of you based on those inferences. These inferences can touch on demographic characteristics (e.g., age, gender identity, sexual orientation, race), mental and physical health conditions, and stigmas around body image.

In our paper, “The Slow Violence of Surveillance Capitalism,” we argue that a more specific understanding of the diverse harms people experience with OBA can help us legitimize those harms in a legal or regulatory context—even when it is difficult to measure the harms in dollars and cents. In our work, we surveyed 420 people about their recent experiences related to invasions of privacy associated with OBA. Participants reported four major categories of harm:

  • Psychological Distress, or the broad negative mental or cognitive effects related to OBA in general. Defining characteristics include general emotional distress; disruption of the browsing experience; information redundancy; questioning one’s own browsing behavior; and paranoia from suspicion of eavesdropping. For example, you might live in fear that your devices are listening to your conversations because the ads you get seem too specific: you swear you only talked about a specific toothpaste brand out loud and never searched for it.
  • Loss of Autonomy, or the denial or limiting of the opportunity to make your own choices. Defining characteristics include lack of consent or control over targeting; lack of control over self-presentation; encouragement of negative spending habits; and limiting of consumer choice. For example, maybe you search for lingerie on your personal computer, but then a colleague notices lingerie ads on your work computer; you feel embarrassed and worry about how they perceive you.
  • Constriction of User Behavior, or the alteration of user interactions with technical systems in response to other OBA harms. Defining characteristics include a number of chilling effects: taking additional privacy-protective measures; restricting the utility of personal devices; deleting accounts and disabling devices; and altering natural conversational behaviors. For example, you might find yourself spending hours learning about ad blockers and deleting social media accounts to avoid the ads; this laborious intrusion inhibits your ability to freely enjoy browsing the Internet.
  • Algorithmic Marginalization and Traumatization, or harms specific to personal characteristics (e.g., demographics) or vulnerabilities (e.g., sensitive medical information). Defining characteristics include violation of boundaries; amplification of self-consciousness; traumatic triggers; and fear of social exposure. For example, perhaps a family member has recently passed away, but now all you see online are ads for funeral services when you just want to mourn in peace.

Where do we go from here?

Our work illustrates the lived experiences of harm associated with OBA. Not all of these harms will be easy to quantify or express in monetary terms. So the question remains: what can we do with this understanding to effect change in legal and regulatory contexts?

One path forward is to systematically document evidence of these harms at broader scales. Not everything can be quantified, but there may be ways to quantify proximal measures that speak to some of these harms. The Brave browser, for example, calculates the amount of time and bandwidth saved by blocking ads and turns this into a monetary value—an under-tabulation, to be sure, but perhaps one that becomes meaningful when aggregated across millions of people. Third-party watchdogs can also help aggregate a sufficiently large number of reported harms from users to present to regulatory bodies like the Federal Trade Commission (FTC). One possibility could be the creation of a platform for “violation tracking,” where people can share experiences of harm — similar violation-tracking platforms have been used, for example, to allow workers to report workplace harassment and abuse anonymously.
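To make the aggregation argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every constant in it is a hypothetical placeholder chosen for illustration, not a figure from Brave, the FTC, or our survey; the point is only that a few dollars of recovered time and bandwidth per person per month can sum to hundreds of millions of dollars across a large user base.

```python
# Back-of-the-envelope aggregation of per-user ad-blocking savings.
# Every constant below is an illustrative assumption, not a measured figure.

SECONDS_SAVED_PER_DAY = 30      # assumed page-load time saved per user per day
GB_SAVED_PER_MONTH = 1.0        # assumed bandwidth saved per user per month
HOURLY_WAGE_USD = 20.0          # assumed dollar value of an hour of a user's time
PRICE_PER_GB_USD = 5.0          # assumed metered-data price per gigabyte
USERS = 50_000_000              # assumed size of the user base

def monthly_value_per_user() -> float:
    """Rough dollar value of one user's monthly time and bandwidth savings."""
    hours_saved = SECONDS_SAVED_PER_DAY * 30 / 3600  # ~30 days in a month
    return hours_saved * HOURLY_WAGE_USD + GB_SAVED_PER_MONTH * PRICE_PER_GB_USD

per_user = monthly_value_per_user()   # ~$10 per user under these assumptions
total = per_user * USERS              # ~$500M per month in aggregate
print(f"${per_user:.2f} per user per month; ${total / 1e6:,.0f}M per month across {USERS:,} users")
```

Under these illustrative assumptions, a harm that looks like pocket change for any one person tallies to roughly half a billion dollars a month in aggregate: the same logic that makes individual settlement checks small while the underlying extraction remains enormous.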

However, since regular users often lack the technical expertise to understand the algorithms behind OBA and often cannot directly tie its harms to the perpetrators, identifying indirect evidence of harm is also a pathway forward. Echoing arguments by Ashley Gorski, a senior staff attorney at the ACLU, it is often easier for users to produce evidence that they are taking preventive or protective actions to combat OBA. Actions like installing ad blockers, using VPNs, or regularly clearing cookies can be evidence of the third type of harm in our typology: constriction of user behavior. Another area of future work could thus include documentation of the actions people take to avoid OBA, along with their efforts to convince others to do the same.

Once we have a repository of evidence pointing to lived harms, sourced from the users who have directly experienced those harms, it will be important for a multidisciplinary team of experts to translate these raw data into formats that make for compelling policy briefs. We envision the creation of an infrastructure that facilitates structured dialogue between the public at large and regulatory agencies, like the FTC, that often seek evidence of harm when deciding where to focus their efforts.

What’s clear is that something must be done, and that we can’t look only at monetary value when weighing the benefits and costs of online behavioral advertising. Yes, OBA is incredibly lucrative for these multi-trillion-dollar tech giants. And, yes, maybe our data in isolation is worth far less than what the tech giants can extract when given free rein over it. But the costs are far more nuanced and multifaceted: the benefits are economic, while the costs are human. We need to be better, as a society, at measuring those costs.