
Face Recognition Principles are a Step Forward But Congress Needs to Act

There are very few things more personal than our faces. Our facial expressions convey our strongest emotions and can betray our most subtle desires, and companies are eager to leverage the uniqueness of faces in everything from photo tagging to authentication, marketing, and customer intelligence. Despite the intrinsic sensitivity of such personal data, there are no firm rules governing the use of facial recognition technologies (FRTs), and notwithstanding efforts by CDT and others, there are no federal laws regulating the collection, retention, and sharing of biometric information.

Recently, FRTs have been headline news. Big technology companies like Microsoft and facial recognition vendors themselves have begun to call for conversations on how to regulate FRTs, and even industry associations have suggested that the “right protections” must be put in place. What those protections should be is an open question, and one challenge is that the proper scope of the debate around FRTs remains unclear. The Federal Trade Commission released non-binding guidance in 2012, and an industry-led process at the NTIA in 2016 resulted in minimal best practices for companies deploying face tracking.

Two years later, the Future of Privacy Forum (FPF) has added its take with a set of privacy principles for the deployment of commercial FRT. While FPF’s principles will inform policy debates about the privacy impacts of face tracking, they avoid addressing the thorniest privacy and ethical issues posed by unregulated collection of our facial information. Bright-line rules may be needed, but these principles include several large exceptions that minimize individuals’ control over their own faces and could encourage aggressive data practices by companies.

Facial recognition has many different use cases, and the technology itself can mean everything from simple facial detection (merely recognizing the presence of a human face) to more advanced facial classification (identifying behavioral characteristics or clustering photos) or individual identification. These distinctions can be helpful when evaluating different facial recognition technologies, such as online photo tagging and sorting services or the localized facial recognition systems on consumer products like the Apple iPhone X and the Google Clips camera. However, using FRTs to surveil individuals in public via digital signage or CCTV is more problematic from a privacy perspective.

CDT’s position has always been quite clear: First, companies should generally obtain informed, affirmative consent from individuals before identifying them via facial characteristics in public places or in places open to the public, such as stores. Second, companies should provide consumers with clear, prominent notice of any facial recognition technologies they deploy in such places. The FPF’s principles recommend these practices, but they include some important exceptions that give companies easy avenues around asking for permission.

Unexpected Consent Exceptions

The FPF principles include broad exceptions that cut against a baseline assumption that consent should be required before collecting facial data. The principles appropriately start by emphasizing the importance of getting express, affirmative user consent, but there are a number of caveats that warrant further consideration and clarification by the FPF.

Because facial recognition encompasses many different types of technologies, the principles divide FRTs into four categories: detection, characterization, verification, and identification. CDT has, in the past, viewed the privacy impact of face tracking as a spectrum running from individual counting to individual targeting and identification. The FPF principles suggest a need to reassess these categories, introducing a separate concept of a “unique persistent identifier” that businesses can use to track individual behavior across store visits and locations. The principles clearly exempt collection and use of these identifiers from consent requirements if they are not linked to other personally identifiable information (PII), and they suggest that such information be viewed as somehow pseudonymous.

This is problematic for two reasons. First, a unique persistent identifier is, by definition, personal data. As the Federal Trade Commission has explained, “[e]ven without a name, you can learn a lot about people if you use a persistent identifier to track their activities over time on a particular device. You also can communicate with them.” Moreover, biometric data is both persistent and inherently sensitive. For most of us, our faces are immutable and forever tied to us, regardless of whether a facial template is linked to a credit card, an email address, or a name.

Second, the information that companies exclude from their definition of PII often does not track with individuals’ expectations. The principles single out payment data and purchase history as potential PII, which is important, but presumably generalized financial data or aggregated purchase trends could still be connected to these unique facial identifiers. The ramifications of this categorization – and the weaker privacy protections the principles afford biometric identifiers as a result – are huge. For example, a facial template created by ACME Store in California could be shared with ACME stores across the country. These templates could even be pooled together as part of a corporate collective, so ACME could share facial data with BUYMORE and vice versa, so long as that information is not tied to a name. The principles state that this includes sharing “across one visit, across multiple visits, or locations,” which permits a great deal of information sharing that falls outside individual expectations. As retailer return programs have revealed, the potential for customers to find themselves in a Kafkaesque facial recognition system is high. The FPF would be well advised to clarify what it means by “unique biometric identifiers,” the types of customer demographics that may be appended to them, and why such an exception is warranted.

This is especially concerning given that the FPF principles also do not require consent for any collection of biometric information used in security, fraud, or asset protection programs. This type of collection-and-use exception may be commonplace, but it raises special concerns in the context of facial recognition. The principles do call for companies to avoid meaningful accuracy disparities with respect to race, age, and gender, but the fact remains that facial recognition technologies currently appear to suffer systemic accuracy problems with women and darker-skinned individuals, and solutions have proven elusive.

Inform Users…with a Privacy Policy Disclosure

Regardless of whether the practices that result from these broad exemptions make customers feel uncomfortable, the principles do not provide much guidance as to how individuals can opt out of any type of face tracking. Because consent is not required, exercising this right depends on notice of what is going on, but even as FPF’s principles prioritize transparency, they downplay how difficult it is for an average store shopper to learn whether these technologies are even in use. As the ACLU recently pointed out, retailers are not being forthcoming about how they are using FRTs.

This also ignores the reality that avoiding a location can be easier said than done. For instance, if there is one grocery store in a neighborhood, it is likely to be the first place people turn to for basic essentials regardless of how they feel about its deployment of surveillance technology.

The principles also do not provide much detail about how companies should go about providing transparency. Technology vendors are encouraged to make recommendations and provide guidance to stores implementing facial recognition systems, but this raises challenges similar to those facing the mobile location analytics industry, where an FPF code of conduct also required location service vendors to encourage retailers to post model signage. However, as the FTC’s enforcement action against Nomi Technologies revealed, analytics providers lacked the leverage to force retailers into giving up valuable signage space to detail data collection practices. Though “CCTV” warnings have become ubiquitous, it is unclear how “clear, meaningful, and prominent notice” of FRTs can be provided. Responsible retailers will experiment in transparent ways, but the likely result is that facial recognition vendors and many of the companies that use these technologies will simply add additional information through privacy policies that go unread.

Without individual agency over face tracking, individuals must trust that the remaining principles will adequately protect their privacy and that the technology will be deployed ethically. For instance, the principles give companies wide leeway to use facial characterization technologies to estimate gender, age, or general emotional state, so long as no individual information is maintained over time. For other types of facial recognition, we are concerned by the principles’ limited discussion of appropriate retention periods for face data. Biometric information, because of its sensitivity and permanence, demands special consideration of how such data is stored and how long it is kept. The principles do recommend that this information be encrypted by default, which is a positive step.

Moving from Principles to Rules

With so much attention now on law enforcement’s use of facial recognition, the FPF is to be applauded for trying to improve corporate practices around facial recognition, but the privacy principles are no substitute for clear rules and regulations. The principles go out of their way to state that they should not be a model for legislation. Further, while these principles attempt to capture the full spectrum of Fair Information Practice Principles and apply them to facial recognition, they fall short of the notice and consent provisions in existing law like the Illinois Biometric Information Privacy Act. The principles’ treatment of collection, use, and sharing of biometric identifiers also may not satisfy the requirements of the EU General Data Protection Regulation, which explicitly recognizes the heightened sensitivity of biometric data.

Companies are eager to deploy face tracking for their own ends, but FRTs have the potential to significantly alter our day-to-day existence in the public square (and our understanding and expectation of what constitutes the public square). Principles are only a first step toward establishing meaningful controls that might address public concern about biometrics. Companies and retailers should start by providing much more detail about their biometric data practices, and as Congress and the White House begin to discuss the contours of a federal baseline privacy law, facial recognition technologies deserve special attention.