Two years ago, CDT embarked on a project to explore what we call “digital decisions” – the use of algorithms, machine learning, big data, and automation to make decisions that impact individuals and shape society. Industry and government are applying algorithms and automation to problems big and small, from reminding us to leave for the airport to determining eligibility for social services and even detecting deadly diseases. This new era of digital decision-making has created a new challenge: ensuring that decisions made by computers reflect values like equality, democracy, and justice. We want to ensure that big data and automation are used in ways that create better outcomes for everyone, and not in ways that disadvantage minority groups.
The engineers and product managers who design these systems are the first line of defense against unfair, discriminatory, and harmful outcomes. To help mitigate harm at the design level, we have launched the first public version of our digital decisions tool. We created the tool to help developers understand and mitigate unintended bias and ethical pitfalls as they design automated decision-making systems.
About the digital decisions tool
This interactive tool translates principles for fair and ethical automated decision-making into a series of questions that can be addressed during the process of designing and deploying an algorithm. The questions address developers’ choices, such as what data to use to train an algorithm, what factors or features in the data to consider, and how to test the algorithm. They also ask about the systems and checks in place to assess risk and ensure fairness. These questions should provoke thoughtful consideration of the subjective choices that go into building an automated decision-making system and how those choices could result in disparate outcomes and unintended harms.
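The tool poses questions rather than prescribing specific tests, but to make one of those questions ("how to test the algorithm") concrete: below is a minimal sketch of one common way a team might check a system's outputs for disparate outcomes across groups, by comparing selection rates using the "four-fifths rule" heuristic from U.S. employment law. The function names, sample data, and the 0.8 threshold are illustrative assumptions, not part of CDT's tool.

```python
# Hypothetical illustration (not part of CDT's digital decisions tool):
# comparing selection rates across two groups as a rough disparate-impact check.

def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's.

    A ratio below 0.8 (the "four-fifths rule") is a common, though rough,
    signal that the system deserves further review -- not proof of bias.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high > 0 else 0.0

# Example: a model's approvals (1) and denials (0) for two demographic groups.
approvals_group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approved
approvals_group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% approved

ratio = disparate_impact_ratio(approvals_group_a, approvals_group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: selection rates differ substantially across groups.")
```

A single summary statistic like this cannot capture every notion of fairness (and different fairness metrics can conflict), which is exactly why the tool asks developers to reason about their testing choices rather than handing them one number to optimize.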
The tool is informed by extensive research by CDT and others about how algorithms and machine learning work, how they’re used, the potential risks of using them to make important decisions, and the principles that civil society has developed to ensure that digital decisions are fair, ethical, and respectful of civil rights. Some of this research is summarized on CDT’s Digital Decisions webpage.
How to use the tool
Each quadrant of the interactive graphic represents a different phase in developing an algorithm: designing, building, testing, and implementing. Clicking on a phase brings up the steps within that part of the development process. These steps can be addressed in any order that makes sense for the user, and each provides access to a series of questions that help users interrogate their projects.
The tool does not prescribe right or wrong answers. This is partly because every project raises different risks and calls for different specifications, and partly because we don’t presume to know all of the right answers. The questions should encourage developers to think critically and methodically about whether their projects are designed to produce fair outcomes for everyone.
A work in progress
In the coming months, CDT will continue to iterate on, test, and re-evaluate our digital decisions tool with different industries to improve its usefulness. We welcome feedback on how we can improve the tool to make it more user-friendly and responsive to real-world conditions.
This tool is not a panacea for algorithmic bias and disparate outcomes. The technology community, academics, and civil society must continue to engage in research, information sharing, and the development of technical tools to help mitigate the risk of discrimination and other harms in automated decision-making.
If you have questions or feedback about CDT’s digital decisions tool, please contact Natasha Duarte at [email protected].