

Double Dose of FTC Comments Discuss Remedial Measures and Algorithms in Advertising

This fall, the Federal Trade Commission will kick off a series of hearings to explore how the competition and consumer protection landscape has shifted since the 1995 Pitofsky hearings. The Pitofsky hearings focused on emerging high-tech industries, and the public was encouraged to submit comments on either 5 ¼ or 3 ½ inch floppy disk, labeled with the name and version of the word processing program being used (DOS-based programs were acceptable), and “[f]iles from operating systems should be submitted in ASCII format.” Much has changed in the past twenty-three years, but technology as a legal, policy, and societal disruptor has remained a constant.

The FTC sought comment on a wide range of issues, and for this initial go-around, CDT submitted comments on two key questions: (1) the FTC’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters; and (2) the implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics. [CDT’s comments in response to Question 5 are available here, and our comments in response to Question 9 are available here.]

Boosting Privacy by Design and Data Security Guidance

Among other issues, our first set of comments addresses the need for the FTC both to confront how design decisions affect individual privacy and to further develop its data security regime post-LabMD. “Privacy by Design” is not a new concept, but framing it as a matter only for security engineers and legal teams understates how design decisions involving user experience and usability can violate individuals’ privacy expectations and their perceptions of how products work or what information is being collected. CDT recommends that the FTC do more to elaborate on the design practices it believes are important to protecting privacy. While this can take the form of workshops, best practices, and additional research, it can also include enforcement actions that address design deficiencies.

Section 5’s deception prong grants the FTC authority to act on representations and omissions that are false or misleading. Design decisions can clearly fall under this standard of review, but manipulative and exploitative design choices are not only misleading; they are also unfair to individuals. Design decisions can promote the surreptitious overcollection of data, inviting unwanted secondary uses and posing real privacy risks to individuals.

Another issue currently facing the Commission is how best to police data security lapses and poor practices under Section 5. Data security guidance has to balance the competing goals of flexibility and specificity, but recent court cases, like LabMD, demonstrate the need for the FTC to provide more notice and guidance as to what constitutes “reasonable” data security. The FTC will need to look to existing industry standards and state laws as a baseline and encourage companies to be much more explicit in describing how they protect information and systems, using the FTC’s ability to police deceptive statements as an enforcement tool. Moving forward, the FTC should further leverage the technical expertise of the Office of Technology Research and Investigation, as well as the Commission’s Section 6(b) reporting authority, to collect ongoing information about industry data security practices. Continued FTC enforcement activities can send a powerful message to bad actors and will be important as the Internet of Things becomes a growing driver of data insecurity.

Investigating Discrimination in Targeted Advertising

Our second set of comments details continued evidence of data-driven discrimination since the FTC issued its report, Big Data: A Tool for Inclusion or Exclusion?, in January 2016. It describes advertising practices that either exclude minority groups from important opportunities or explicitly target vulnerable groups with disadvantageous offers, and it explains how behavioral advertising categories can serve as proxies for sensitive characteristics even when they appear facially neutral. So-called “lookalike” audiences can exacerbate the problem; for example, if an advertiser’s custom audience already underrepresents African-Americans, that audience list is 33 times more likely to generate a lookalike audience that also underrepresents African-Americans.

Unfortunately, enforcement of existing civil rights and equal opportunity laws has not kept pace with the online advertising ecosystem, where it can be more difficult, even for advertisers themselves, to detect discriminatory practices. We call on the FTC to explore where its existing Section 5 “unfairness” authority can be applied to certain ad targeting practices. Some of these practices are impossible for individuals to avoid precisely because they are designed to take advantage of people’s known or statistically inferred vulnerabilities (for example, targeting payday loan ads to people who have recently lost their jobs or gotten divorced). When investigating predatory ad targeting, the Commission should consider whether the targeting involves collecting sensitive data, or inferring sensitive information from other data, and using it in ways that are likely to exploit particularly vulnerable groups.

These consumer protection and inequality issues must be addressed. Despite calls for more consumer education and transparency, individuals are not in a position to see the ways in which targeted marketing practices may be unfairly excluding them or exploiting their personal information. The FTC has studied these issues before and should build upon its expertise with a renewed focus on ensuring that targeted advertising is fair and equitable.