Digital Decisions

Introduction

Algorithms play a central role in modern life, determining everything from search engine results and social media content to job and insurance eligibility. Unprecedented amounts of information fuel engines that help us make choices about even mundane things, like what restaurant to visit. Institutions use data to anticipate our interests, determine what opportunities we are afforded, and steer us through digital environments. There are countless automated processes behind each recommendation, prediction, and result — but how are these decisions made?

The designers behind data-driven analytics decide what data to collect and include, and what criteria are relevant to the process. Although this sophisticated statistical analysis is a pillar of 21st-century society, the technical processes behind the decisions are not transparent to users or regulators. The question of how to regulate algorithmic decision-making and related data practices has implications for all aspects of our lives, including economic opportunity, well-being, and free speech.

CDT is working with stakeholders to develop guidance that protects the rights of individuals, encourages innovation, and promotes incentives for responsible use of automated technology. Building on principles established by the civil rights community, CDT’s guidance for ethical use of automated decision-making technology helps translate principle into action for private industry. We believe that responsible use of data could do more than mitigate unintentional discrimination; it could help address the deep-seated cultural bias that contributes to the systemic inequality civil rights law is meant to remedy. This is an opportunity to demystify digital decision-making: to create principles for advocates, and dynamic, responsive criteria that industry can implement.

I. Past is Prologue

In the summer of 2014, Ben Bernanke, former chair of the U.S. Federal Reserve, was denied a mortgage. Since stepping down from his government post earlier in the year, Bernanke had been able to command a reported $250,000 for giving a single speech and had signed a book contract estimated to be in the seven figures. Yet when he and his wife sought a mortgage to refinance their house in the District of Columbia, a house whose value, according to tax records, was dwarfed by his likely income over the next couple of years, they were turned down. As the New York Times explained, “Ben Bernanke…is as safe a credit risk as one could imagine. But he just changed jobs a few months ago. And in the thoroughly automated world of mortgage finance, having recently changed jobs makes you a steeper credit risk.” The numbers were crunched and the decision was made: Mr. Bernanke was denied.

Presumably, the former Fed chair found a banker willing to look more closely at his application and reconsider. But how do less well-resourced individuals fare when decisions about credit and other matters of economic consequence are largely arbitrated by technical systems outside of our control?

The modern world is full of computers crunching numbers on decades of people ‘like us’ and drawing conclusions about our fate. In a world powered by big data, past is prologue.

II. What You Need To Know

Almost every sector of the economy has been transformed in some way by algorithms. Some of these changes are upgrades, benefiting society by predicting factual outcomes more accurately and efficiently, such as improved weather forecasts. Other algorithms empower tools, such as Internet search engines, that are indispensable in the information age. These advancements are not limited to traditionally computer-powered fields. Algorithms can help doctors read and prioritize X-rays, and they are better and faster than humans at detecting credit card fraud. Wall Street fortunes depend on who can write the best trade-executing algorithm. Songwriting algorithms can replicate the styles of legendary composers. Algorithms are everywhere and automation is here to stay.

However, automated decision-making systems powered by algorithms hold equally broad potential for harm. Some of the most crucial determinations affecting our livelihoods—such as whether a person is qualified for a job, is creditworthy, or is eligible for government benefits—are now partly or fully automated. In the worst case scenario, automated systems can deny eligibility without providing an explanation or an opportunity to challenge the decision or the reasoning behind it. This opacity can leave people feeling helpless and discourage them from participating in critical institutions.

Additionally, automated decision-making systems can have disproportionately negative impacts on minority groups by encoding and perpetuating societal biases. A 2014 big data report commissioned by the White House concluded that “big data analytics have the potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace.” If data miners are not careful, sorting individuals by algorithm might create disproportionately adverse results concentrated within historically disadvantaged groups. Current laws governing fair credit, equal opportunity, and anti-discrimination may not be adequate to address newer ways of ranking and scoring individuals across a range of contexts. For example, the concept and practicality of redress may be meaningless if an individual does not even know she is being assessed—much less what the criteria are.

There is an ongoing conversation among experts in many different fields who are working to develop techniques and principles to mitigate these risks. But the complexity of the technology and the diversity of contexts in which it is used add up to a very complicated problem. This piece breaks that problem into digestible parts to allow readers to get acquainted with the issue at their own speed. It concludes with a proposed solution: an interactive tool that prompts data scientists or programmers with questions designed to reveal and mitigate biased design structures.

III. Quick Definitions of Key Concepts

Algorithms are essentially mathematical equations. However, unlike mathematical equations you may be familiar with from primary school, algorithmic outputs do not necessarily represent a ‘right answer,’ defined by an objective truth. Imperfect data sets and human value judgements shape automated decisions in intentional and unintentional ways. To understand how this works, it is important to understand some of the basics of algorithms, machine learning, and automation.

Wait, what is an algorithm?

In its most basic form, an algorithm is a set of step-by-step instructions—a recipe—“that leads its user to a particular answer or output based on the information at hand.” Applying its recipe, an algorithm can calculate a prediction, a characterization, or an inferred attribute, which can then be used as the basis for a decision. This basic concept can be deployed with varying degrees of sophistication, powered by the huge amounts of data and computing power available in the modern world. Algorithms take large amounts of information and categorize it based on whatever criteria the author has chosen.
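
To make this concrete, the snippet below is a minimal sketch in Python of an algorithm as a recipe. Every detail in it is invented for illustration: the field names, the criteria, and the thresholds are the author’s choices rather than features of any real lending system, but they echo how a rule like “recently changed jobs” can mechanically drive a denial.

    # A minimal sketch of an algorithm as a "recipe": fixed, author-chosen
    # criteria applied step by step to each input. Field names and
    # thresholds are invented purely for illustration.

    def score_application(applicant: dict) -> str:
        """Categorize a mortgage application using hand-picked criteria."""
        # Step 1: the author decides which inputs matter at all.
        income = applicant["annual_income"]
        months_in_job = applicant["months_in_current_job"]

        # Step 2: the author decides how to weigh them.
        if income < 40_000:
            return "deny"
        if months_in_job < 12:      # a recent job change counts against you,
            return "deny"           # regardless of other signs of stability
        return "approve"

    # Every output follows mechanically from the chosen criteria; there is
    # no "objective truth" being discovered, only the recipe being applied.
    print(score_application({"annual_income": 500_000, "months_in_current_job": 4}))
    # -> "deny"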

What is machine learning?

Computers can process very complex algorithms and very large inputs in microseconds, producing algorithmic decisions that are often significant and can be opaque. Some algorithms use a process called machine learning. Machine-learning algorithms identify patterns in existing data and use those patterns as rules for analyzing new information; in doing so, they can surface and amplify trends the researchers themselves never noticed. To build such systems, researchers “train” the computer using existing datasets, or training data, before setting out to predict results from new data. The process of gleaning insight from unwieldy amounts of information is called data mining. In his presentation to the FTC, Solon Barocas explained that the purpose of data mining is “to provide a rational basis upon which to distinguish between individuals and to reliably confer to the individual the qualities possessed by those who seem statistically similar.”
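
The toy sketch below shows the train-then-predict pattern described above, using the scikit-learn library. The handful of “historical” records and the two features (months in previous job, miles from the office) are invented solely to illustrate the mechanics; no real dataset or employer is implied.

    # A toy sketch of training on existing data and predicting on new data.
    from sklearn.tree import DecisionTreeClassifier

    # Invented records: [months_in_previous_job, miles_from_office],
    # labeled by whether that past employee stayed at least two years.
    training_features = [
        [36, 5], [48, 10], [6, 40], [12, 35], [60, 8], [9, 50],
    ]
    training_labels = [1, 1, 0, 0, 1, 0]   # 1 = stayed two or more years

    # "Training": the model extracts whatever statistical patterns it can
    # find in the historical records.
    model = DecisionTreeClassifier().fit(training_features, training_labels)

    # "Prediction": a new applicant is scored by statistical similarity to
    # past cases, much as Barocas describes.
    print(model.predict([[10, 45]]))   # likely predicted not to stay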

Some algorithms are designed to predict a future outcome. Designing a predictive algorithm involves coming up with a definition of success and choosing target variables that will bring it about. For example, when designing an algorithm that sifts through job applications to recommend hires, success could mean saving money, hiring a diverse group of employees, or any number of other metrics. The definition of success determines the target variables—the thing the algorithm will actually try to predict. If success means saving money, and employee turnover costs money, then a good hire may be defined as one who is likely to stay at the company for a long time (so the target variable would be longevity).
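
As a sketch of that step, the snippet below (with invented records and an arbitrary 24-month threshold) shows how a definition of success, reducing turnover, gets translated into the target variable the model will actually learn to predict.

    # From a definition of success to a target variable. Records, field
    # names, and the threshold are invented for illustration.
    past_employees = [
        {"name": "A", "months_at_company": 40},
        {"name": "B", "months_at_company": 10},
        {"name": "C", "months_at_company": 30},
    ]

    TARGET_THRESHOLD_MONTHS = 24   # the author's choice, not an objective fact

    # The target variable (longevity) the model will be trained to predict:
    targets = [e["months_at_company"] >= TARGET_THRESHOLD_MONTHS for e in past_employees]
    print(targets)   # [True, False, True]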

Target variables get further broken down in a process called feature selection. This is where programmers decide what specific criteria they will prioritize to sort, score, or rank cases. For example, the qualities that determine whether an employee will stay at a company long-term may include the amount of time a person stayed in his or her previous job.
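
Continuing the same hypothetical, feature selection is where the programmer decides which of an applicant’s many attributes the model is allowed to see. The record and the chosen features below are invented for illustration.

    # Feature selection: picking the specific criteria the model will use.
    applicant = {
        "name": "J. Doe",
        "months_in_previous_job": 30,
        "miles_from_office": 12,
        "college": "State University",
        "hobbies": ["chess", "running"],
    }

    # The author's choice of which attributes "count":
    SELECTED_FEATURES = ["months_in_previous_job", "miles_from_office"]

    feature_vector = [applicant[f] for f in SELECTED_FEATURES]
    print(feature_vector)   # [30, 12] -- everything else is simply ignored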

IV. How Can Algorithms Be Biased?

Far from being unbiased alternatives to human subjectivity, algorithms are imbued with the values of those who create them. Each step in the automated decision-making process creates possibilities for a final result that has a disproportionately adverse impact on protected or vulnerable classes. These steps include designing, building, testing, and refining the model. While the design stage presents multiple opportunities to introduce bias, it also presents opportunities to prevent unintended bias and ensure fairness.

Returning to our example of automated hiring: suppose the training data show that employees who live closer to work tend to stay in their jobs longer than employees who live farther away. A model relying on this pattern could disproportionately reject, and systematically disadvantage, people who live in rural areas. (In fact, the analytics company Evolv declined to include this data in its hiring model because of concerns that it would have a discriminatory impact.)
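
The sketch below illustrates the kind of check that concern implies: compare how a model that leans on distance from work would treat rural and urban applicants. The applicants, the 30-mile cutoff, and the simplistic stand-in “model” are all invented to show the proxy effect; they do not describe Evolv’s actual system.

    # Invented applicant records for illustrating a proxy effect.
    applicants = [
        {"miles_from_office": 45, "rural": True},
        {"miles_from_office": 50, "rural": True},
        {"miles_from_office": 8,  "rural": False},
        {"miles_from_office": 12, "rural": False},
    ]

    # Suppose the trained model effectively rejects anyone beyond 30 miles.
    def model_rejects(a):
        return a["miles_from_office"] > 30

    for group in (True, False):
        members = [a for a in applicants if a["rural"] == group]
        rate = sum(model_rejects(a) for a in members) / len(members)
        print(f"rural={group}: rejection rate {rate:.0%}")
    # rural=True: 100%, rural=False: 0% -- a facially neutral feature acting
    # as a proxy for where people live.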

Likewise, the selection of the training data can lead to disparate impacts. Machine-learning algorithms identify trends based on statistical correlations in the training data. However, algorithms can only predict the future based on the past—or more specifically on whatever data about past events is on hand. Because of this, the results can unintentionally be discriminatory or exacerbate inequality. For example, if training data for an employment eligibility algorithm consists only of all past hires for a company, no matter how the target variable is defined, the algorithms may reproduce past prejudice, defeating efforts to diversify by race, gender, educational background, skills, or other characteristics.

The characteristics of the data itself can skew predictive models. Data that was collected under biased conditions—or for a purpose unrelated to the goal of the algorithm—may not accurately represent the relevant population. For example, the city of Boston released a smartphone app called StreetBump, which used the app users’ GPS data to automatically report potholes to the city, so that they could be repaired. However, since the city was only collecting this pothole data from smartphones, it was under-representative of lower-income neighborhoods, where fewer people own smartphones. The data collection method was unintentionally perpetuating unequal distribution of public services.

Machine-learning algorithms adjust based on new feedback, eventually making decisions based on criteria that may not have been explicitly chosen by human programmers. Absent careful auditing, this process can mask unintended bias. Hard evidence of algorithmic discrimination is difficult to come by: discovering algorithmic bias often requires either time-consuming forensics or a fortuitous revelation, such as Dr. Latanya Sweeney’s accidental discovery that searching her own name prompted ads for an arrest record to be served, while searching traditionally white-sounding names did not.
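
One simple form of auditing, sketched below as an illustration rather than as a method this piece prescribes, is a disparate-impact check: compute the rate of favorable outcomes for each group and flag any group whose rate falls well below that of the best-off group (the familiar four-fifths rule of thumb). The decision records and group labels are invented.

    # A sketch of a disparate-impact audit over a model's decisions.
    from collections import defaultdict

    decisions = [
        {"group": "A", "hired": True},  {"group": "A", "hired": True},
        {"group": "A", "hired": False}, {"group": "B", "hired": True},
        {"group": "B", "hired": False}, {"group": "B", "hired": False},
    ]

    totals, favorable = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        favorable[d["group"]] += d["hired"]

    rates = {g: favorable[g] / totals[g] for g in totals}
    baseline = max(rates.values())
    for g, r in rates.items():
        flag = " <-- review" if r / baseline < 0.8 else ""   # four-fifths rule of thumb
        print(f"group {g}: selection rate {r:.0%}{flag}")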

Many of these issues are not new; discrimination and bias are not modern inventions. However, the scope and scale of automated technology make the impact of a biased process faster, more widespread, and harder to catch or eliminate. That is why there is an emerging field specifically dedicated to analyzing the role technology plays in perpetuating or amplifying the advantages of historically privileged populations and points of view.
