Screened Out: The Impact of Digitized Hiring Assessments on Disabled Workers
This report is co-authored by Henry Claypool and Wilneida Negrón.
Companies have incorporated hiring technologies, including AI-powered assessments and other automated employment decision systems (AEDSs), into various stages of the hiring process across a wide range of industries. While proponents argue that these technologies can aid in identifying suitable candidates and reducing bias, researchers and advocates have identified multiple ethical and legal risks that these technologies present, including discriminatory impacts on members of marginalized groups. This study examines some of the impacts of modern computer-based assessments (“digitized assessments”) — the kinds of assessments commonly used by employers as part of their hiring processes — on disabled job applicants.
The findings and insights in this report aim to inform employers, policymakers, advocates, and researchers about some of the validity and ethical considerations surrounding the use of digitized assessments, with a specific focus on impacts on people with disabilities.
Methodology
We used a human-centered qualitative approach to investigate and document the experiences and concerns of a diverse group of participants with disabilities. Participants completed a series of digitized assessments, including a personality test, cognitive tests, and an AI-scored video interview, and were then interviewed about their experiences. Our study included people with low vision, people with brain injuries, autistic people, D/deaf and/or hard of hearing people, people with intellectual or developmental disabilities, and people with mobility differences. Participants also had diverse demographic backgrounds in terms of age, race, and gender identity.
The study focused on two distinct groups: (1) individuals who are currently working in, or intend to seek, hourly jobs, and (2) attorneys and law students who have sought or are likely to seek lawyer jobs. By studying these groups, we aimed to understand the potential impacts of digitized assessments on workers in roles requiring different levels of education and experience.
Findings
Disabled workers felt discriminated against and believed the assessments presented a variety of accessibility barriers. Contrary to the claims made by developers and vendors of hiring technologies that these kinds of assessments can reduce bias, participants commonly expressed that the design and use of assessments were discriminatory and perpetuated biases (“They’re consciously using these tests knowing that people with disabilities aren’t going to do well on them, and are going to get self-screened out”).
Participants felt that the barriers they grappled with stemmed from assumptions the designers made about how assessments would be presented, designed, or even accessed. Some viewed these design choices as potentially reflective of an intent to discriminate against disabled workers. One participant stated that it “felt like it was a test of, ‘how disabled are you?’” Beyond accessibility concerns, participants generally viewed the assessments as ineffective for measuring job-relevant skills and abilities.
Participants were split on whether these digitized assessments could be modified in a way that would make them more fair and effective. Some participants believed the ability to engage in parts of the hiring process remotely and asynchronously could be useful during particular stages, if combined with human supervision and additional safeguards. Most, however, did not believe it would be possible to overcome the inherent biases against individuals with disabilities in how assessments are designed and used. As one participant put it, “We, as very flawed humans, are creating even more flawed tools and then trying to say that they are, in fact, reducing bias when they’re only confirming our own already held biases.”
Given the findings of this study, employers and developers of digitized assessments need to re-evaluate the design and implementation of assessments in order to prevent the perpetuation of biases and discrimination against disabled workers. There is a clear need for an inclusive approach in the development of hiring technologies that accounts for the diverse needs of all potential candidates, including individuals with disabilities.
Recommendations
Below we highlight our main recommendations for developers and deployers of digitized assessments, based on participants’ observations and experiences. Given the harm these technologies may introduce, some of which may be intractable, the following recommendations set out to reduce harm rather than eliminate it altogether.
Necessity of Assessments: Employers should first evaluate whether a digitized assessment is necessary, and whether alternative methods could measure the desired skills with a lower risk of discrimination. If employers choose to use digitized assessments, they should ensure that the assessments are fair and effective: that they measure skills or abilities directly relevant to the specific job, and that they do so accurately.
Accessibility: Employers must ensure assessments adhere to existing accessibility guidelines, like the Web Content Accessibility Guidelines (WCAG) or initiatives of the Partnership on Employment and Accessible Technologies (PEAT), and that the selected assessments accommodate and correctly assess the skills of disabled workers with various disabilities.
Implementation: Even when assessments are effective, fair, and accessible, employers can take additional steps to reduce bias: maintaining meaningful human oversight throughout all assessment processes, using assessments to supplement, not replace, comprehensive candidate evaluations, and being transparent about when and how assessments are used.