
Government Watchdog Reports Many Government Agencies Don’t Even Know What Facial Recognition Systems They Are Using

by CDT intern Emma Li.

In a report released to the public on June 29, the Government Accountability Office (GAO) found that 13 of the 14 federal agencies that use non-federal facial recognition technologies to support criminal investigations don’t even know which systems their employees are using. Without that basic knowledge, those agencies have no way to assess, let alone mitigate, the risks of their use of facial recognition.

The fact that these agencies lack even basic knowledge about their employees’ use of facial recognition technology is particularly concerning because, in the law enforcement context, inaccurate results can lead to wrongful deprivations of liberty and other basic rights. Studies have established that facial recognition technology is often inaccurate, and that it misidentifies darker-skinned individuals and women at disproportionately high rates.

One report by the National Institute of Standards and Technology, for example, found that facial recognition algorithms were up to 100 times more likely to produce false positives for individuals from African or Asian countries than for lighter-skinned individuals from European countries. Facial recognition technology disproportionately and negatively impacts individuals and communities of color, and thus threatens Americans’ civil rights at large. Agencies that do not track their employees’ use could be deploying highly discriminatory and invasive technology that threatens people’s rights without even being fully aware of it. And an agency that does not know what technologies its employees are using cannot feasibly have procedures in place to mitigate those technologies’ risks.

The risks associated with the use of facial recognition extend beyond discrimination. For example, the GAO report noted that six federal agencies used facial recognition technology following George Floyd’s murder, primarily to identify the perpetrators of criminal activity “during the period of civil unrest, riots, or protests.” Law enforcement’s deployment of facial recognition during the George Floyd protests or similar events can clearly have a chilling effect on people’s freedom of speech and association, particularly for those in marginalized communities.

The GAO report’s findings are all the more disturbing given that ten of the agencies reported using the facial recognition system from Clearview AI, a company already facing lawsuits alleging that it illegally stockpiled data, as well as criticism for assembling its gallery of images by scraping them from the internet without the consent of the source websites’ users. Clearview AI closely guards its algorithm, so neither the algorithm nor the company’s claims about its effectiveness have been subjected to public scrutiny. The company has been dubbed one that “might end privacy as we know it,” and European lawmakers have pointed to it as one reason for their hesitation to work more closely with the United States on artificial intelligence regulation.

The report made the same two recommendations to each of the 13 federal agencies that lack oversight of their use of facial recognition technologies. Specifically, GAO recommended that each agency develop and implement a mechanism for tracking which non-federal facial recognition systems its employees use and, after doing so, assess the risks of using those systems. Twelve agencies concurred with both recommendations; the thirteenth concurred with one and partially concurred with the other. Implementing the GAO’s recommendations offers a starting point, but the agencies must also take further steps to address the inherent risks of using facial recognition technology.

The GAO’s report highlights severe shortcomings in some law enforcement agencies’ awareness and monitoring of their employees’ use of facial recognition technology. Facial recognition gives rise to major risks of harm, particularly to people in marginalized communities, and agencies that do not even know which technologies their employees are using certainly are not taking measures to address those risks. Even supporters of the use of facial recognition in the law enforcement context should recognize that the lack of awareness GAO has documented, and the consequent absence of any meaningful guardrails on the use of facial recognition, are unacceptable. Congress and the Administration should demand better.