
Congress is Writing a Privacy Law. It Must Address Civil Rights.

Last week, the Senate Committee on Commerce, Science, and Transportation held the latest in a string of congressional hearings aimed at figuring out what a U.S. privacy law should look like. We’re heartened that so many members of both parties understand the need for comprehensive privacy protections and the failure of “notice-and-consent” models. Several witnesses and members emphasized the need to address discriminatory data practices and protect marginalized groups. The unregulated collection, use, and sharing of data disproportionately burdens people of color, people with disabilities, people experiencing poverty or homelessness, survivors of gender-based violence, and other underserved populations.

As CDT and others have repeatedly told Congress, privacy proposals that do not address unfair and discriminatory data practices are inadequate and represent squandered opportunities. Last month, CDT held a briefing on these issues in the House of Representatives, in partnership with Rep. Bobby Rush and a broad coalition of civil rights and consumer groups. An important takeaway from this briefing was that privacy discussions too often ignore the lived experiences of marginalized groups, and many proposed solutions do not tackle some of the hardest issues affecting these communities.

At last week’s hearing, Sen. Cantwell asked whether there was a “running list” of discriminatory data practices. While not a comprehensive catalogue, here are a few recent and ongoing examples:

  • Location data sharing threatens physical safety. Companies have routinely been cavalier in how they share location data. For example, late last year, The New York Times revealed that many of the apps that collect location information for localized news, weather, and other location services repurpose or share that information with third parties. In early 2019, mobile phone carriers were caught (again) sharing location data with third-party aggregators—data that has ended up in the hands of bounty hunters and correctional officers. The ability to track an individual’s location with ease can put them in grave danger. Stalkers, aggressive debt collectors, and the watchful eyes of law enforcement use this location data to surveil and harass vulnerable individuals. Their recourse is limited. The National Network to End Domestic Violence (NNEDV) advises survivors who are concerned they may be tracked to consider leaving their phones behind when traveling to sensitive locations or simply turning their phones off altogether. No one should have to make the choice between using a cell phone and being safe from stalking and abuse.
  • Housing, credit, and job ads can be skewed based on race, gender, and other protected characteristics. A number of studies have shown how online advertising can exclude people from critical opportunities based on protected characteristics, both through advertisers’ targeting decisions and through ad delivery algorithms. Regulators are beginning to take action in this area, but the practical reality is that these systems are far too complicated for individuals to navigate or meaningfully control. An ad-supported internet should not mean one where individuals are deprived of important opportunities.
  • Commercial surveillance tech can take advantage of power imbalances. A host of technologies on the market, including facial recognition and video surveillance, make spying on people easier. While some are explicitly marketed for that purpose, many others are sold as home security or smart home solutions. Without appropriate safeguards, the design and proliferation of these products can facilitate abuse. A recent report from OneZero describes how smart lock company Latch allows landlords to track their tenants. Face recognition and analysis are also becoming more common in brick-and-mortar retail settings. While some face recognition systems require customers to enroll, others are more surreptitious. At least one company, Faception, is marketing “facial personality analytics” to retailers. The company pitches the technology as a way for retailers to automatically learn about a customer’s personality based on their face, such as whether they are a compulsive shopper. Surreptitious face surveillance can be particularly harmful to people of color (especially darker-skinned women), who are often underrepresented in training data and are misidentified or misclassified at higher rates. The discriminatory effects of face analysis are likely to be compounded if it is used to make inferences about people’s personalities and determine how they will be treated in stores.
  • Take-it-or-leave-it privacy policies disadvantage low-income Americans. The irony of so-called “notice-and-choice” is that it gives people very little choice in whether and how they share personal information. If companies only have to notify users of their data policies and get “consent,” people are usually forced to accept overly permissive data agreements; forgoing the use of an app or service is rarely a real option. This lack of choice is exacerbated for low-income Americans, who disproportionately rely on mobile technologies and may not be able to shop around for devices or apps that provide a higher standard of privacy. Incentive programs such as grocery store loyalty cards and mobile provider discounts are frequently offered in exchange for broad access to (and sale of) customer data, and low-income customers are least able to pass up these offers.
  • Deceptive interfaces exploit vulnerabilities. User interfaces are sometimes designed to trick people into sharing more information than they intend to. This is what happened when Cambridge Analytica used a seemingly benign quiz for research on manipulating voters. Deceptive interfaces often take advantage of low digital literacy or other vulnerabilities. One example of this is webpages that ask for personal information ostensibly for insurance quotes, credit checks, or college scholarship applications. While these landing pages can appear to be run directly by a creditor, insurance company, or education organization, they are often run by “lead generators,” who sell the personal information to other companies.
  • Data brokers can feed discriminatory decision-making. New transparency requirements in Vermont have shed some light onto the shadowy data broker industry. Data brokers aggregate information from multiple sources; create profiles, scores, or reports on individuals; and provide them to third parties including employers, landlords, and insurers. These profiles or scores are often based on inaccurate or incomplete information and can affect people’s opportunities in ways no one understands. In 2013 and 2014, FTC and Senate Commerce Committee reports documented that data brokers had created consumer profiles in categories like “rural and barely making it,” “ethnic second-city strugglers,” “retiring on empty: singles,” “diabetes interest,” and sexual assault survivors.

While privacy legislation alone cannot solve data-driven discrimination, it can and should include protections against many of these data misuses. At a minimum, legislation should do the following:

  • Prohibit unfair data practices. Relying on a notice-and-consent model to protect privacy puts an untenable burden on individuals and does nothing to address the misuse of data once a person has technically consented to share it. Instead, Congress needs to limit what companies can do with data. For example, data such as biometrics (including face prints) and precise geolocation should be used only when required to provide a specific service that the person has requested.
  • Prohibit deceptive interfaces. The FTC has the authority to enforce against deceptive practices, but Congress should reaffirm that deceptive user interfaces, such as research studies disguised as quizzes or lead generators disguised as application forms, are unlawful.
  • Provide FTC rulemaking authority to regulate unfair and discriminatory advertising practices. The agency needs more firepower to ensure that civil rights are protected in the online advertising ecosystem.
  • Provide individual rights to access, correct, and delete data. Individuals should be able to reasonably access and delete their personal information held by companies, and to correct information used to make critical decisions about them, such as credit, housing, and employment eligibility.

If you’d like to learn more about how these rights can be implemented, please see our page on the need for federal privacy law.