
CDT Statement on Government Use of Algorithmic Decision-Making Tools to NYC Council Committee on Technology

October 16, 2017

Chairman James Vacca
New York City Council Committee on Technology
250 Broadway Suite 1749
New York, NY 10007

Attn: Zach Hecht, [email protected]

Re: Committee Meeting on Automated Processing of Data for the Purposes of Targeting Services, Penalties, or Policing to Persons, October 16, 2017.

Dear Chairman Vacca,

On behalf of the Center for Democracy & Technology (CDT),[1] I write to offer recommendations for the City of New York to govern the algorithms it uses to make decisions affecting New Yorkers. Agencies at all levels of government are turning to automation and machine-learning algorithms to help make decisions that affect individuals’ rights and access to resources. In New York City, computer algorithms have been used to assign children to public schools,[2] rate teachers,[3] target buildings for fire inspections,[4] and make policing decisions.[5] These algorithms can process large amounts of data and uncover patterns or insights to drive decision-making. However, they are not neutral decision makers.

When governments use algorithms to make or assist with decisions, those algorithms become public policy, subject to public oversight.[6] This is true regardless of whether the algorithm is created by a government agency or a private vendor. The City of New York has an obligation to understand, scrutinize, and explain how its algorithms make decisions affecting New Yorkers.

At a minimum, the city should ensure and demonstrate to the public that NYC’s algorithmic decision-making tools (1) are aligned with the city’s policy goals and the public interest; (2) work as intended; (3) do not use data in ways that marginalize minority or vulnerable populations or exacerbate inequality; and (4) provide meaningful transparency to New Yorkers so that they can appeal and seek remedies for automated decisions that are incorrect, unjust, or contrary to law.

1. The City of New York is responsible for ensuring that its algorithmic decision-making is aligned with the city’s policy goals and the public interest.

Agencies often rely on third parties to develop algorithms for use in the public sector.[7] Although these algorithms may be developed and maintained by private entities, their judgments still represent public policy. Public officials must be able to evaluate these models to ensure that they serve the City’s purposes and the public’s interest.

For example, city officials must decide how to distribute error in criminal justice algorithms. Many jurisdictions use risk assessment tools to help make decisions about people in the criminal justice system based on the projected likelihood that those people will commit future crimes.[8] These decisions can range from allocating resources to selecting parole candidates to determining sentences. Every algorithm will produce some error, and city officials must make policy decisions about the relative costs of false negatives (falsely classifying a high-risk individual as low-risk) versus false positives (falsely classifying a low-risk individual as high-risk).[9] These decisions must account for the type of prediction being made (e.g., violent crime versus non-violent crime), the consequences of a high-risk prediction (e.g., counseling versus a longer prison sentence), and the context in which the tool is used (e.g., a trial versus an in-home social worker visit). This judgment will always require balancing competing values, such as the desire to minimize the risk of crime and the goal of giving each person an opportunity for rehabilitation. Policy makers must engage in that careful balancing themselves and not outsource policy decisions to vendors.
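To make this tradeoff concrete, the simplified sketch below (using invented scores and outcomes, not any actual tool or dataset) shows how the choice of a risk-score threshold shifts error between false positives and false negatives. Selecting that threshold is itself a policy judgment about which kind of error is more costly.

```python
# Hypothetical illustration: how a risk-score threshold allocates error.
# The scores and outcomes are invented; no real tool or dataset is implied.

def error_counts(scores, outcomes, threshold):
    """Count false positives and false negatives at a given threshold.

    scores   -- predicted risk scores between 0 and 1
    outcomes -- 1 if the person actually reoffended, 0 otherwise
    """
    false_positives = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 0)
    false_negatives = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 1)
    return false_positives, false_negatives

# Toy data: five predicted scores and what actually happened.
scores   = [0.9, 0.7, 0.55, 0.4, 0.2]
outcomes = [1,   0,   1,    0,   0]

# A lower threshold flags more people as high-risk (more false positives);
# a higher threshold misses more people who reoffend (more false negatives).
for threshold in (0.3, 0.5, 0.8):
    fp, fn = error_counts(scores, outcomes, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```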

2. The City must ensure that its algorithms work as intended.

There is no shortage of companies ready to license their automated tools to government agencies, and city officials must be prepared to evaluate whether those tools actually meet the city’s needs. For example, the United States Department of Agriculture Food and Nutrition Service (FNS) has recommended that states use web-based automated tools to monitor illegal attempts to sell or solicit SNAP benefits online.[10] However, a Government Accountability Office (GAO) test of these tools found that they were impractical for detecting fraudulent social media posts because of technological limitations.[11] The tools could not detect geographic location information in posts, so states could not limit their searches to their jurisdictions, and the tools also used search methods that were not supported by social media platforms.[12] Ultimately, the tools were significantly outperformed by manual searches for SNAP fraud.[13] Before purchasing, recommending, or requiring the adoption of automated tools, city officials must ask whether the types of analysis those tools perform, and their technical capabilities, suit the city’s needs.

3. The City should avoid using data in ways that marginalize vulnerable populations and exacerbate inequality.

Algorithms use the examples in their training data to make decisions or predictions in new cases. If the training data reflects discrimination on the basis of race, gender, or other class distinctions, the resulting model may learn to invidiously discriminate. This has been well documented in the criminal justice context, where algorithms are typically trained on police records that reflect law enforcement bias and the disproportionate arrest and incarceration of black Americans.[14] Some algorithms have been found to produce less accurate results for people of color than for whites, possibly because people of color were underrepresented in the training data. This includes facial recognition algorithms used by law enforcement[15] as well as commercial tools for processing social media posts.[16]

It is imperative that city officials understand the data used to train public-sector algorithms, where that data comes from, and how it might reflect bias. Officials should also test, independently or with vendors, for potential discriminatory effects. If testing is conducted by the vendor, city officials should obtain documentation of how the tests were conducted, what potential biases were uncovered, and how the model was adjusted to mitigate them.
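As one illustration of what such testing could look like, the minimal sketch below (using hypothetical data and group labels, not a prescribed methodology or any vendor’s actual records) compares false positive rates across demographic groups. A large gap between groups is a signal that the model may be disadvantaging one population and warrants further review.

```python
# Minimal sketch of a disparate-error-rate check, using hypothetical data.
# Real testing would use the actual model's outputs and appropriate fairness metrics.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: list of (group, predicted_high_risk, actually_reoffended)."""
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted, actual in records:
        if actual == 0:                      # person did not reoffend
            counts[group]["negatives"] += 1
            if predicted == 1:               # but the model flagged them high-risk
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["negatives"] for g, c in counts.items() if c["negatives"]}

# Toy records: (group, model prediction, actual outcome)
records = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 0, 0),
]

# Prints the false positive rate for each group; a large disparity merits review.
print(false_positive_rate_by_group(records))
```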

4. The City should provide meaningful transparency to New Yorkers about how it uses algorithms to make decisions.

While most government policy is found in documents that the public can access and evaluate, policy contained in algorithms is often shrouded by trade secrets and hidden from public scrutiny. For example, like many states and localities, both New York State and New York City have adopted the “Value Added Model” (VAM) for evaluating and scoring teacher performance.[17] Law professors Robert Brauneis and Ellen P. Goodman submitted a public records request to both the city and the state to obtain information about the models.[18] The city sent several letters stating that it needed more time to respond to the request, while the state released only a small number of sample outputs, not enough to understand or evaluate the model.[19] The contract between the New York State Education Department and the model’s vendor, the American Institutes for Research, provided that the “methodologies or measures” supplied by the contractor were “proprietary information” and could not be disclosed by the Education Department.[20] These broad trade secret provisions are common in government contracts for algorithmic tools and circumvent the traditional transparency function of open records laws.[21]

The City of New York can take several steps to provide useful information to New Yorkers about how algorithms carry out public policy. When city agencies license algorithms from third-party vendors, they should require the vendors to document how the algorithms work, the data they are trained on, the variables they consider, their accuracy and error distribution, how the algorithms have been tested and the results of those tests, and the steps the vendor has taken to mitigate invidious discrimination by algorithms. City officials should also push back against overly broad trade secret protection so that more information about public-sector algorithms can be obtained through public records requests or made available by the city.

When algorithmic decisions affect people’s fundamental rights or vital interests (such as financial interests), the city should be able to provide meaningful information about why the decision was made (e.g., what variables were material to the decision) so that people can effectively challenge decisions and seek remedies for harm.
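For a simple scoring model, that kind of explanation could be as straightforward as reporting each variable’s contribution to the final score, as in the hypothetical sketch below. The weights, variables, and inputs are invented for illustration; real systems would need explanation methods suited to the model actually in use.

```python
# Hypothetical illustration: per-variable contributions for a simple linear score.
# The weights and inputs are invented; real explanations must match the real model.

WEIGHTS = {"missed_payments": 2.0, "income_thousands": -0.05, "years_at_address": -0.3}

def explain_score(applicant):
    """Return the total score and each variable's contribution, largest first."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return total, ranked

applicant = {"missed_payments": 3, "income_thousands": 40, "years_at_address": 2}
score, reasons = explain_score(applicant)
print(f"score = {score:.2f}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

An output of this kind tells the affected person which variables drove the decision, which is the minimum needed to contest an error (for example, a misrecorded number of missed payments).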

I would like to thank the Committee for addressing this important issue and for considering these recommendations. Please reach out to me with any questions or for assistance with future work on this issue.

Sincerely,

Natasha Duarte, Policy Analyst
Center for Democracy & Technology
1401 K Street NW Suite 200
Washington, D.C. 20005
202.407.8822
[email protected]

 

[1] The Center for Democracy & Technology (CDT) is a non-profit 501(c)(3) organization dedicated to protecting digital rights and the free and open internet. CDT’s Digital Decisions project advocates for more thoughtful and equitable approaches to big data and automation (https://cdt.org/issue/privacy-data/digital-decisions/). Our Digital Decisions tool is designed to help engineers and policy makers assess and surface potential bias in algorithms (https://cdt.org/blog/digital-decisions-tool/).

[2] Amy Zimmer, High Schools Dole Out Misinformation About Admissions Process, Parents Say, DNAInfo (Nov. 15, 2016), https://www.dnainfo.com/new-york/20161115/kensington/nyc-high-school-admissions-ranking.

[3] Cathy O’Neil, Don’t Grade Teachers with a Bad Algorithm, Bloomberg (May 15, 2017), https://www.bloomberg.com/view/articles/2017-05-15/don-t-grade-teachers-with-a-bad-algorithm.

[4] Mayor Bloomberg and Fire Commissioner Cassano Announce New Risk-based Fire Inspections Citywide Based on Data Mined from City Records, NYC.gov (May 15, 2013), http://www1.nyc.gov/office-of-the-mayor/news/163-13/mayor-bloomberg-fire-commissioner-cassano-new-risk-based-fire-inspections-citywide#/6.

[5] See Brennan Center for Justice, Brennan Center for Justice v. New York City Police Department, brennancenter.org (May 19, 2017), https://www.brennancenter.org/legal-work/brennan-center-justice-v-new-york-police-department.

[6] See generally, e.g., Robert Brauneis & Ellen P. Goodman, Algorithmic Transparency for the Smart City, Yale J. L. & Tech. (forthcoming), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499&download=yes.

[7] See generally, e.g., Id.

[8] See, e.g., Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

[9] See Brauneis & Goodman, supra note 6, at 12-13.

[10] Gov’t Accountability Office, GAO-14-641, Enhanced Detection Tools and Reporting Could Improve Efforts to Combat Recipient Fraud, Report to Ranking Member, Comm. on the Budget (2014), http://www.gao.gov/assets/670/665383.pdf.

[11] Id.

[12] Id. at 29–30.

[13] Id.

[14] See, e.g., Angwin et al., supra note 8; Julia Angwin and Jeff Larson, Bias in Criminal Risk Scores is Mathematically Inevitable, Researchers Say, ProPublica (Dec. 30, 2016), https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say.

[15] See Claire Garvie, Alvaro M. Bedoya & Jonathan Frankle, Georgetown Law Center on Privacy & Technology, The Perpetual Lineup: Unregulated Police Face Recognition in America, 53-57 (Oct. 18, 2016), https://www.perpetuallineup.org/sites/default/files/2016-12/The%20Perpetual%20Line-Up%20-%20Center%20on%20Privacy%20and%20Technology%20at%20Georgetown%20Law%20-%20121616.pdf.

[16] See Su Lin Blodgett and Brendan O’Connor, Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English, 2017 Proceedings of the Fairness, Accountability & Transparency in Machine Learning Conference, https://arxiv.org/pdf/1707.00061.pdf.

[17] See Brauneis & Goodman, supra note 6, at 37–38.

[18] Id.

[19] Id.

[20] Id.

[21] See Taylor R. Moore, Center for Democracy & Technology, Trade Secrets & Algorithms as Barriers to Social Justice (2017), https://cdt.org/wp-content/uploads/2017/08/2017-07-31-Trade-Secret-Algorithms-as-Barriers-to-Social-Justice.pdf.