AI Policy & Governance, Equity in Civic Technology, Privacy & Data

Critical Scrutiny of Predictive Policing is a Step to Reducing Disability Discrimination

Under the national spotlight on discriminatory policing practices, states and localities are reconsidering police surveillance techniques, including predictive policing. Several cities in California are illustrative: Oakland and Santa Cruz discontinued their use of predictive policing in 2015 and 2017, respectively, after finding that bias embedded in the data used to predict where crime would occur resulted in over-policing of communities of color. The cities are now looking to clearly and affirmatively ban these programs outright, with Santa Cruz voting in June to become the first city in the U.S. to prohibit predictive policing. Oakland may be poised to follow suit, and its Privacy Advisory Commission specifically proposed requiring attention to the race of people subjected to police surveillance technology.

While discussions about predictive policing tend to address race-based over-policing exclusively, one vital point is neglected: the racial implications of predictive policing are inherently disability implications, and we cannot talk about one without talking about the other.

Decades of predictive policing have resulted in biased over-policing and little else.

Using technology to collect and manage crime data has a long and checkered history. CompStat, a set of data management practices, was first adopted by the NYPD in 1994 as part of the transition from paper records to digitized crime statistics. Over time, CompStat created growing pressure to increase police stops, and in 2010, NYPD officers admitted to misrepresenting crime data. CompStat was eventually modernized into predictive policing, described as an “outgrowth” of CompStat because algorithmic developments expanded how crime data could be accessed and analyzed.

Predictive policing has remained prevalent across the country for nearly a decade and has taken two forms: person-based and place-based. In 2013, Chicago piloted a person-based program built around a strategic subject list (SSL), or “heat list,” which used data from an individual’s arrest records and social network to assign a high risk of involvement in shootings. A RAND study of the program found that it did not distinguish between potential perpetrators and victims, failed to reduce crime, and led police to close cases prematurely. The civil rights group Upturn found that over a third of people on Chicago’s SSL had no history as perpetrators or victims, and that higher risk scores were instead attributed mostly to younger age. Inclusion on the SSL, for any reason, was itself a predictor of arrest. Chicago stopped using the SSL in January 2020.

PredPol, a far more common place-based system, causes over-policing by encoding law enforcement’s geographic biases about gang activity and by failing to account for how activity shifts geographically over time. It also tends to create vicious, self-perpetuating cycles: decisions to target particular communities for intensified policing result in more arrests and prosecutions, which policymakers and police departments then cite as proof that the increased policing is necessary. Research has consistently shown that predictive policing correlates only with arrest rates, not actual crime rates, since much crime never results in an arrest.
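This self-perpetuating cycle can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical model, not any vendor’s actual algorithm: the district names, numbers, and the rule of allocating patrols in proportion to recorded arrests are all illustrative assumptions. Two districts have identical underlying crime rates, but the one that starts with slightly more recorded arrests keeps receiving more patrols, so its recorded-arrest lead grows even though nothing about actual crime differs.

```python
import random

random.seed(1)

# Two hypothetical districts with IDENTICAL underlying crime rates.
TRUE_CRIME_RATE = {"district_a": 0.3, "district_b": 0.3}

# District A starts with slightly more *recorded* arrests (historical bias).
arrests = {"district_a": 60, "district_b": 50}

TOTAL_PATROLS = 100  # patrols available to allocate each year

for year in range(1, 11):
    snapshot = dict(arrests)          # last year's cumulative arrest records
    total = sum(snapshot.values())
    for district, past in snapshot.items():
        # Feedback step: patrols go where past arrests were recorded.
        patrols = round(TOTAL_PATROLS * past / total)
        # Arrests can only be recorded where officers are sent, so the data
        # measure police presence, not underlying crime.
        new = sum(random.random() < TRUE_CRIME_RATE[district] for _ in range(patrols))
        arrests[district] += new
    gap = arrests["district_a"] - arrests["district_b"]
    print(f"year {year}: {arrests}  (recorded-arrest gap: {gap})")
```

In this simplified model, the recorded-arrest gap between the two districts widens year after year, not because crime differs, but because the data reflect where officers were deployed rather than where crime occurred.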

Predictive policing exacerbates the targeting of disabled people.

The main connection drawn between predictive policing and discrimination in current discussions is racial bias, but disability is inherent within systemic racism. Disability is overrepresented in communities of color, especially among low-income Black people. Every marginalized community experiences higher rates of disability because of health disparities, including in maternal and reproductive care; environmental racism; racial trauma; and the lasting impacts of other forms of violence. Despite higher disability rates, disabled people of color (especially Black, Native, and Brown disabled people) are less likely to be accurately identified and to receive necessary support, and are thus more likely to be arrested, profiled, or otherwise targeted by the criminal legal system. Because predictive policing programs use zip code and other proxies for race and income as data points, these programs in turn target disabled people.

Numerous statistics reflect how much more likely disabled people are to encounter police and to be presumed violent by them. A 2001 FBI report stated that developmentally disabled people are seven times more likely to encounter police. Encounters between police and disabled students are also common: during the 2015-2016 school year, disabled students, especially Black and Brown disabled students, were the most frequently subjected to school-related arrests. This trend continues into adulthood. A 2017 Cornell study found that disabled people, especially those with cognitive and emotional disabilities, were 44 percent more likely to be arrested; 55 percent of disabled Black people had been arrested, compared to nearly 40 percent of disabled white people. Further, over 80 percent of those incarcerated are estimated to be disabled. Police encounters and arrests generate data for predictive systems, making those systems more likely to identify disabled people as higher risk.

Even when they are meant to help, predictive systems harm disabled people experiencing mental health crises. Crisis intervention training programs are undermined by two factors that drive police use of force: the actual level of threat, as measured by incidents of active resistance from residents, and a biased police perception that treats the percentage of non-white residents in an area as an indicator of threat. Routine wellness checks can result in involuntary hospitalizations that further add to a disabled person’s record. Disabled people are therefore uniquely susceptible to targeting through predictive policing.

Disabled people are at greater risk of deadly force from police, and this risk increases even more when predictive systems target them.

While complete statistics are hard to come by because federal databases have failed to fulfill their responsibility to report reliable statistics on fatal police encounters, there is substantial evidence that disabled people are disproportionately killed by police. The Washington Post’s Police Shootings Database shows that of the 5,437 people killed in police shootings since January 2015, 1,217 had a mental illness. Talila Lewis, Volunteer Director of Helping Educate to Advance the Rights of Deaf Communities, notes that estimates that up to half of people killed by police are disabled may themselves be too low, because the methodology behind the statistic requires that people have been diagnosed.

Research shows that predictive systems can exacerbate or even outright cause the very harms they were designed to prevent. For example, AI-based suicide prediction programs are designed to scour medical records and social media platforms to support intervention and prevention. When these systems flag social media communications indicating a mental health crisis and trigger emergency calls, however, police see the crisis first and foremost as a threat and respond ready to use force. As a result, wellness checks too often end in deadly police force, causing the very harm they are supposed to prevent.

Local governments should continue to closely scrutinize predictive policing.

Local decision-makers’ recent actions to address these harms are part of a larger trend. Oakland is one of thirteen jurisdictions to date to pass community-based police surveillance legislation, with twenty-one more bills in the works. As states take varied approaches to surveillance issues, CDT, the ACLU, the NAACP, and EFF have called on local governments to address racially discriminatory policing, including through careful scrutiny of surveillance technology.

The disproportionate and pervasive harm of predictive policing to disabled people demonstrates the need for solutions to its dangerous flaws, up to and including a ban. Oakland recognized that relying on data analysis to prevent crime should not come at the expense of harming marginalized communities. Santa Cruz’s new ordinance attempts to strike this balance with an exception that would allow predictive policing with the City Council’s explicit approval, upon a scientifically validated finding that the technology would not perpetuate civil rights violations. The city must address residents’ concerns that this exception leaves the door open to resuming over-policing as the technology becomes increasingly sophisticated. While Santa Cruz and Oakland are taking steps in the right direction, much greater attention to the impact on disabled people is necessary before any predictive policing system operates at the state or local level.