Report – To Reduce Disability Bias in Technology, Start With Disability Data

This report was co-authored by Bonnielin Swenor, Director of the Johns Hopkins Disability Health Research Center.
Introduction
When people with disabilities interact with technology, they risk facing discriminatory impacts in several important and high-stakes contexts, like employment, benefits, and healthcare.
For example, many employers use automated employment decision tools as part of their hiring process. These can include resume screeners and video interview tools that use algorithms to analyze things like vocal cadence or eye movements. Such tools can unfairly screen disabled applicants out of jobs by, for example, flagging the unusual eye movement of a blind or low-vision individual and removing them from the applicant pool as a result.
People with disabilities have also been deprived of their benefits when algorithms have been integrated into benefits determination systems, such as those that decide how many hours of home-based care a disabled person can receive through Medicaid. This, in turn, undermines those individuals' ability to live independently. Algorithms are also being incorporated into healthcare decision-making systems, helping determine who stays in a hospital versus being discharged, who receives opioids as part of post-surgical treatment, and much more. When these algorithmic systems create biased outcomes, people with disabilities can experience worse health outcomes – and the impacts of this technology-facilitated discrimination can be amplified for multiply-marginalized disabled people (including disabled people of color and disabled LGBTQ+ individuals).
Disability rights and disability justice activists have a long history of fighting against discrimination that impacts disabled people. While technology-facilitated disability discrimination may be a newer form of older injustices, it is not going anywhere. Indeed, as technologies – algorithmic and otherwise – continue to become incorporated into everyday life, and as people with disabilities interact with them more and more, disparate and problematic effects will only increase, both in frequency and in severity.
While it is tempting to write off this bias as the result of the so-called algorithmic “black box,” disparate and discriminatory algorithmic outcomes can often be traced back to problems with the data on which models are trained – and better data is likely to produce better results. Moreover, the harms of incomplete or erroneous data sets extend beyond technology. Data that is collected and used to quantify and generate insights about people with disabilities can also inform advocacy efforts for disabled people, including demonstrating the need for and supporting the development of disability-inclusive policies, allocating funding for public benefits, and upholding disability-related civil rights laws. To tackle technology-facilitated disability discrimination – and improve the lives of people with disabilities overall – it is first necessary to understand, and then mitigate, the problems endemic to disability-related data.
This paper identifies the various ways in which data sets may exclude, inaccurately count, or be non-representative of disabled people. It unpacks the factors that result in poor collection and availability of representative data sets, and provides recommendations for how to mitigate these concerns, which we collectively refer to as a “disability data justice” approach.
We highlight several recommendations, including:
- Disability data should be collected in all contexts where other demographic data is collected.
- Data should be collected and stored in ways that are respectful of personal and data privacy.
- New and more inclusive methods of both defining disability and collecting disability data must be developed.
- Practitioners should embrace a growth mindset around disability data.
- People with disabilities should be included in the creation, deployment, procurement, and auditing of all technologies.
- Disabled people – particularly disabled leaders and those with technology, disability rights, or disability justice expertise – should be centered in the creation and implementation of technology and AI policies.
- Data should be collected and stored in ways that are accessible to individuals with disabilities.
While significant changes in data collection are needed to inclusively design algorithmic systems, these changes are possible – and necessary.