
AI Policy & Governance, Privacy & Data

Surveillance Tech Discriminates Against Disabled People: New CDT Report

Content note: Brief graphic description of sexual violence.

Brandy Mai is a White Army veteran and mother of four who has post-traumatic stress disorder (PTSD) from an attack in which she was drugged and raped. She told me that one of the only things she remembers about the attack was that her rapist was watching her and that she felt “absolutely powerless to stop it because of the sedation.” 

Last year, Brandy took the Georgia bar exam remotely using automated proctoring software, which uses complex algorithms to flag test-takers for potentially suspicious behavior (as distinct from live remote proctoring, which lets a stranger watch a test-taker in what may well be their home environment). She told me that the automated software brought those memories rushing back – a classic symptom of PTSD. 

“I knew I was being watched and was powerless to stop it,” she said, “especially on the sections where there were graphic rape questions with no trigger warnings.” 

Brandy failed the bar. 

And she’s not alone. Early in the COVID-19 pandemic, the National Disabled Law Students Association published a report documenting dozens and dozens of concerns from law graduates with a range of disabilities whom various jurisdictions failed or refused to accommodate when implementing remote proctoring. CDT put out its own analysis of why both technologies pose serious risks of discrimination against disabled people.

Disabled test-takers reported concerns about involuntary movements caused by cerebral palsy or Tourette’s syndrome, atypical eye gaze because of blindness or autism, and even the inability to leave the camera’s view for medically necessary food and bathroom breaks due to conditions ranging from diabetes to Crohn’s disease. And, in an infuriatingly circular problem, people with underlying anxiety were concerned that merely knowing the software was present would exacerbate their anxiety and therefore compromise their performance.

Automated and remote test proctoring software programs are not new, but have vastly increased in usage since the onset of the pandemic, joining a panoply of other algorithmic surveillance technologies that monitor what people do at home, at work, at school, and in many other parts of everyday life. 

Brandy’s experience is just one of many stories of disabled people experiencing harm from the use of automated, algorithmic, and artificially intelligent technology in a variety of contexts. In our latest report, we’ve examined four major areas where such algorithmic surveillance technologies disproportionately risk harm to disabled people or discriminate against them outright:

  1. Education. Students in K-12 schools and institutions of higher education have faced increased monitoring of school-issued devices used for remote learning, social media monitoring, and on-campus surveillance, including aggression-detecting microphones and facial recognition programs. Schools have implemented these technologies ostensibly to promote public safety, address bullying, reduce suicide, and prevent mass shootings. These forms of surveillance negatively impact the school environment for the most marginalized students while risking further profiling of and discrimination against disabled students.
  2. Criminal legal system. Cities and counties have continued to adopt predictive policing software and algorithmic risk assessment systems that disproportionately impact low-income people, people of color, and disabled people, who are already overrepresented in arrests and in jails and prisons. Meanwhile, a growing number of companies market background screening tools for purposes like tenant screening that rely on questionable data and sometimes illegal standards to exclude people from housing based on arrest records or histories of surviving domestic violence or poverty.
  3. Health. Disabled people worry about the increased development of medications that technologically track compliance and transmit data not only to their doctors, but also to insurance companies and potentially third-party vendors. Others risk discrimination enabled by predictive analytics designed to identify people with a range of mental health and developmental disabilities and even predict who will be diagnosed. And home care workers, many of them disabled themselves, who provide care for people with disabilities and older people are now subject to intrusive monitoring through federally mandated “electronic visit verification” software programs that often track their exact GPS location.
  4. Employment. During the pandemic, workers, including delivery drivers, warehouse workers, and office workers doing their jobs remotely, have all faced a sharp increase in the use of “bossware” – automated software programs that monitor how they do their jobs, where they are, and how efficiently they work, while minimizing downtime and opportunities for breaks. Disabled workers risk having their disabilities and chronic illnesses exacerbated, and risk retaliation if they disclose their conditions and request accommodations related to the software. Meanwhile, other workers have seen their employers incentivize wellness programs that may collect sensitive, personal data about their health and potentially exclude disabled employees from participation. 

Disabled people are simultaneously hyper-surveilled and often erased in society. On one hand, we are treated as potentially dangerous, deceptive, or wasteful, and thus subject to technologies that seek to flag noncompliance, cheating, or fraud. But on the other hand, we are treated as functionally invisible, and thus left to contend with the damaging and discriminatory impact of technologies that seek to drive efficiency, maximize production, and promote wellness without considering their deleterious effect on the physical and mental health of disabled people caught in their wake. 

The ableist impact of algorithmic surveillance technologies demands critical attention from researchers, developers, and policymakers. COVID-19 is not over, but even if and when society reaches a point where COVID-19 is no longer a significant force shaping our world, these and likely more surveillance technologies will be here to stay. Technologies that exacerbate and deepen existing harms and risks of harm, threatening people’s safety, health, and freedom, demand urgent intervention to limit the damage they can do.