NYC Draft Bill on AI in Hiring Needs Higher and Clearer Hurdles
New York City is no stranger to the extraordinary potential and pitfalls of artificial intelligence. From school-matching, to pretrial risk assessments, to predictive policing, the city’s use of algorithmic decision-making tools has had unique and often troubling ramifications for historically marginalized groups.
Now, the City Council is considering a draft ordinance that would require automated decision-making tools used in hiring and employment to be audited. The bill, for which the Council held a hearing in November 2020, would mark the first major effort by the City Council to regulate the use of automated decision tools by the private sector.
Automated employment decision tools take many forms that can create employment barriers for disabled people. Some of these mine resumes for key terms that are stereotypically associated with “good” employees, and they can learn to distinguish candidates who are most similar to those who were previously selected. Others purport to use a person’s responses to questions or images to determine personality traits.
Vendors also offer video interview tools that analyze candidates’ facial movements, eye contact, vocabulary, and speech patterns. Still other tools claim to evaluate a person’s aptitudes while playing a set of games. CDT’s report, Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?, explains how these sorts of tools have the potential to worsen disability-based disparities in the workforce and labor market.
The draft ordinance addresses the sale and, to a lesser extent, the use of such automated employment decision tools. It would require vendors to “audit” these tools before sale and then annually thereafter. Audits must “predict” whether each tool complies with the city’s civil rights statute and other laws governing discrimination in employment. The ordinance would also require employers to notify applicants about the use of an automated employment decision tool and what characteristics or job qualifications the tool measured – but the required notice could be provided up to 30 days after the tool had been used.
In many respects, the ordinance represents a step in the right direction. By establishing affirmative obligations for the vendors who develop employment decision tools and for the employers who use them, it would signal to those who sell or use these tools that they are responsible for checking for potential sources of employment discrimination. And by broadly requiring an assessment of compliance with employment discrimination laws at the local, state, and federal levels, the bill covers a wide range of protected traits that are too often neglected when bias is examined.
Yet, the bill could and must do more. Guidance regarding employers' obligations under federal antidiscrimination laws is complex and was, for the most part, written decades before the recent advances in data analytics and machine learning that spurred the development of automated hiring tools.
How courts and agencies will apply antidiscrimination laws to these tools remains largely an open question. Meanwhile, failing to protect disabled workers from algorithmic bias would exacerbate the significant hurdles those workers already face in the labor market. Though reliable statistics are hard to come by, the employment rate for people with disabilities is estimated to be 32.8%, compared to 79% for people without disabilities.
For people with mental-health-related disabilities, the estimated employment rate may drop to as low as 20-25%. These disparities reflect, in part, longstanding misconceptions about what disabled workers can do. These misconceptions, like all biases, can be embedded into automated decision-making tools. No tool is “bias-free” if it allows for – let alone worsens – disability bias.
Because New York City’s ordinance would be the first of its kind in the United States, its impact will be magnified nationally. That is why CDT and disability advocacy groups are urging the City Council to ensure that the final ordinance protects disabled jobseekers (see the op-ed on this issue that CDT CEO Alexandra Givens recently published in The New York Times). When the ordinance comes back up for consideration, the Council should take a few key issues into account.
The Need for Pre-Assessment Notice
One problem with the current version of the bill is that it requires notice only after an employer has used an automated tool to screen candidates. By then it is too late, especially for applicants with disabilities who may need reasonable accommodations. Before they are evaluated, applicants need to be told not only that an employer will use an automated tool to assess candidates, but also what the tool measures, how it measures it, and how candidates can request reasonable accommodations if needed.
This is especially important for tools that require engagement or interaction between the applicant and the tool. An applicant should be able to request accommodations if they anticipate or discover that their disability may affect a tool’s accuracy in assessing their ability to perform the essential functions of the job. Further, vendors and employers need to think carefully about which types of reasonable accommodations to offer and whether those accommodations actually compensate for the many ways disabilities can affect the evaluation.
Consider, for example, tools that require a candidate to play computer games or answer written questions presented on a computer screen. Giving applicants additional time might be a reasonable accommodation for candidates with dyslexia, but for candidates with visual impairments or sensitivity to light, extra time would be of little assistance.
Ensuring That Bias Audits Are Rigorous
The bill should also be more explicit about what a “bias audit” must entail. Done correctly, bias audits should lead to better practices that mitigate biases against people protected by civil rights laws, including those with disabilities. But because the draft ordinance defines a bias audit as simply “an impartial evaluation…to assess [a tool’s] predicted compliance” with antidiscrimination laws, without setting any floor for what the evaluation must entail, a vendor could attempt to comply with the ordinance by conducting “audits” that lack rigor, transparency, or both.
These audits should check for disparate treatment and run statistical tests to detect disparate impact, but those steps alone are not sufficient to ensure either fairness or legal compliance. To produce fairer and more compliant tools, the City Council should consider two more changes to the bill’s language about bias audits:
- Explicitly recognize that compliance requires checking for job-relatedness.
Federal, state, and local antidiscrimination laws prohibit hiring processes that disadvantage workers with protected traits, including disability, unless the processes are based on attributes that are critical to the job. Therefore, an effective audit must verify that the knowledge, skills, and other characteristics the tool attempts to assess are job-related. Because certain attributes may be relevant and appropriate for one position but not for another, this verification must be conducted for each position and each employer for which the tool will be used. The audit should also determine whether aspects of a tool’s design would prevent it from accurately measuring job-related attributes in all applicants.
- Clarify that bias audits must effectively detect disability bias.
Statistical analysis is a common approach to bias auditing because it simply requires comparing the selection rates of people who do and do not belong to a protected class (a minimal example of such a check appears after this list). But this approach is generally ineffective for detecting disability bias: disability is hard to quantify because of the many kinds of disabilities, the different ways each disability presents in and affects each person, legal restrictions on employers’ ability to inquire about disabilities, and the ableism that discourages voluntary disclosure of disability. To better account for a tool’s impact on disabled workers, the bill should require vendors to identify and correct elements of the tool that could produce disability bias, both during the initial design process and on an ongoing basis.
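To make concrete what comparing selection rates involves, here is a minimal Python sketch of the standard adverse-impact check, often summarized by the EEOC’s “four-fifths” rule of thumb. The groups, numbers, and function names are hypothetical illustrations, not anything prescribed by the draft ordinance or CDT’s report, and the sketch also shows why such a test struggles with disability: it can only count applicants whose disability status is actually known.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool advanced."""
    return selected / applicants


def adverse_impact_ratio(protected_rate: float, comparison_rate: float) -> float:
    """Protected group's selection rate relative to the comparison group's."""
    return protected_rate / comparison_rate


# Hypothetical outcomes from an automated screening tool.
rate_disclosed_disability = selection_rate(selected=9, applicants=40)        # 22.5%
rate_no_disclosed_disability = selection_rate(selected=120, applicants=300)  # 40.0%

ratio = adverse_impact_ratio(rate_disclosed_disability, rate_no_disclosed_disability)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.56

# The four-fifths rule of thumb flags a ratio below 0.8 as possible disparate
# impact. The comparison works only for applicants whose group membership is
# known; because most disabilities are never disclosed to an employer, a test
# like this can easily miss disability bias.
if ratio < 0.8:
    print("Possible disparate impact against the protected group.")
```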
The bias auditing section should be clarified to ensure that bias audits capture all the ways in which automated assessments can unfairly disadvantage disabled workers. This will require proactively investigating the tool’s design and correcting potential sources of discrimination at the outset. Running statistical tests is not sufficient, particularly in light of the difficulties in quantifying impacts on disabled workers. The bias audit requirements should, in short, recognize that there is no one-size-fits-all way to check compliance with the full panoply of employers’ nondiscrimination obligations.
Bottom Line
New York City’s bill shows that the city is paying attention to how algorithm-driven decision-making affects its workers. It could spark widespread awareness of algorithmic bias in hiring and, if done correctly, could lead to much-needed changes in the way both regulators and employers approach the increasing use of automated hiring tools.
Now, the City Council needs to recognize that real accountability requires policies that improve access to economic opportunity, including for people with disabilities. When it comes to algorithm-driven decision-making, it’s time to raise our standards.