Next year promises to be an important one for advancing conversations about accountability in artificial intelligence. While civil society has been grappling with the implications of artificial intelligence for a while, more and more lawmakers are becoming aware of the difficult ethical challenges posed by algorithms and automated decision-making systems.
The New York City Council has taken a proactive step by enacting a bill establishing a task force to explore fairness, accountability, and transparency in automated decision-making systems operated by the city. This is a big deal. The use of these technologies by city governments has real impacts on citizens. Today, in New York City, algorithms have been used to assign children to public schools, evaluate teachers, target buildings for fire inspections, and make policing decisions. However, public insight into how these systems work and how these decisions are being reached is inadequate.
This poses a serious challenge to core democratic values and responsible (and responsive) government, particularly when automated decision-making systems are deployed by well-meaning but ill-equipped public authorities. In an October 2017 report, the AI Now Institute went so far as to caution against the very use of these systems by public agencies in criminal justice, healthcare, welfare, and education absent public auditing, testing, and review. There are no easy answers, and the black box nature of these systems can make outside scrutiny difficult, limiting accountability and making bias difficult to detect.
New York City has an opportunity to rise to this challenge, and this law has the potential to surface proposals from a new set of voices on improving accountability in artificial intelligence. Moving forward, a diverse task force made up of technical experts, community advocates, and social workers will have 18 months to present recommendations on assessing algorithmic bias and its impact, and on how information about algorithms can be made public. Specifically, the bill calls for recommendations on:
- Informing individuals about whether they have been subject to an automated decision, and how they can request and receive an explanation of that decision and its basis;
- Evaluating systems for disproportionate impacts on people based upon age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation, alienage or citizenship status;
- Providing recourse for people harmed by an automated decision-making system if that system is found to be problematic;
- What information, including technical information and perhaps ones-and-zeros, can be made publicly available to help citizens assess how automated decision-making systems function and are used by New York City; and
- Creating archives of both automated systems and their training data.
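The bias-evaluation task in particular lends itself to concrete metrics. As one illustrative sketch (not a method prescribed by the bill), the "four-fifths rule" used in employment-discrimination analysis compares rates of favorable outcomes across demographic groups; the function names and data here are hypothetical:

```python
from collections import Counter

def selection_rates(decisions):
    """Favorable-outcome rate per group.

    decisions: iterable of (group, favorable) pairs, favorable a bool.
    """
    totals, favorable = Counter(), Counter()
    for group, fav in decisions:
        totals[group] += 1
        if fav:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.

    Ratios below 0.8 are commonly flagged under the four-fifths rule.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: group A favored 40/100, group B 20/100.
decisions = ([("A", True)] * 40 + [("A", False)] * 60 +
             [("B", True)] * 20 + [("B", False)] * 80)
ratio = disparate_impact_ratio(decisions)  # 0.2 / 0.4 = 0.5, below 0.8
```

A single ratio like this is only a starting point; a task force would also need to weigh base rates, intersectional groups, and the downstream stakes of each decision.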
These are tough tasks, but they are topics that demand careful consideration. In earlier comments on this bill, CDT suggested that any public deployment of algorithmic decision-making tools (1) be aligned with clear public policy goals and advance the public interest; (2) work as intended (or as advertised by government contractors); (3) not marginalize vulnerable populations or exacerbate inequality; and (4) provide meaningful transparency to New Yorkers so that they can appeal and remedy automated decisions that are incorrect, unjust, or contrary to law.
To accomplish these ends, the task force will need to ensure the city is equipped to rigorously evaluate the algorithmic tools at its disposal. This requires not only in-house expertise but also a willingness to challenge technology vendors. We note the bill provides protections against the disclosure of proprietary information, but overbroad trade secret protections are already being used to foil open records laws and may handicap the city’s own ability to scrutinize algorithms being used on its citizens. We hope New York City casts a skeptical eye on the deployment of algorithms that are shrouded in confidentiality demands.
Still, this first-in-the-nation proposal portends further action by lawmakers in this area. It may well be a model for cities everywhere to develop expertise and engage with technical experts. Government-led processes such as this can provide a credible forum for airing out the most troublesome issues surrounding the use of algorithms and automated decision-making. So-called “smart cities” are all the rage, but everyone could stand to get smarter with respect to algorithms. Now, New York City is positioned to show how the public sector can lead the way in the responsible use of algorithms. While much more needs to be done, there is little question that the broader technology community, academics, and the Center for Democracy & Technology would be eager to collaborate in the effort in the new year.