Examining NIST’s Latest Approach to Managing Bias in AI
The latest draft guidance from the National Institute of Standards and Technology (NIST) on bias in artificial intelligence is an exciting bellwether. For years, social scientists, activists, and civil society have been sounding the alarm about the harm that biased AI systems can cause to individuals and the public at large. However, those who investigate bias in AI systems have traditionally been siloed off from the technologists who build them.
NIST’s latest guidance, A Proposal for Identifying and Managing Bias in Artificial Intelligence, helps bridge that gap between social scientists and computer scientists, giving developers some of the tools they need to avoid bias in their products from the beginning.
In CDT’s comments on this proposal, we highlight further lessons from social scientists and civil society that NIST could draw on. Many of the harms of biased AI that NIST’s proposal mentions occur at the individual level, such as in health care, employment, and criminal justice. CDT has long advocated for addressing these issues, but we believe it is also important for NIST to address the harms that biased AI causes at the population level, particularly on social networks, where a small bias, magnified across hundreds of millions of users, can have a huge effect.
The proposal also underplays the role of transparency and community input in addressing these harms. It is not enough simply to have delegated authorities comb code for bias; journalists, researchers, and marginalized groups should all have a role in catching and diagnosing bias in the wild. Transparency can also help with managing bias once it is uncovered, another area where the proposal can be strengthened and expanded. Once bias is found in AI systems, there need to be means of redress, along with public monitoring and evaluation, to make sure that the bias is truly mitigated.
For more details on the CDT recommendations for A Proposal for Identifying and Managing Bias in Artificial Intelligence from NIST, you can read our full comments here.