Standards for Artificial Intelligence Can Shape a More Secure and Equitable Future

Authored by CDT Summer Intern Dominic Bosco

Artificial intelligence (AI) today is exceedingly powerful. It can generate realistic faces of people that don’t exist. It can take the faces of people that do exist and make them say things they did not actually say, such as our world leaders singing John Lennon’s “Imagine” together. As the capabilities of artificial intelligence continue to grow, so too does public excitement and government interest in setting standards that will allow AI to flourish. In this post, we’ll first discuss the landscape of AI and then how standards for AI might help government, business, research, and society better grapple with an automated future.

Background on AI

“Artificial intelligence” has become a buzzword, with news articles either touting it as our ticket to the future or decrying it as an existential threat to humanity. Yet, amidst all this, you might be left wondering: what exactly is AI?

When we imagine artificial intelligence, our minds might jump to futuristic robots that are eerily human-like, or super-intelligent computers ruling the world. AI exists in many forms, however, all of which are quite tame compared to dystopian visions of robot supremacy. AI has a variety of definitions and applications, adding to the confusion. In terms of current uses, MIT’s Technology Review provides a useful definition: “It is the quest to build machines that can reason, learn, and act intelligently.” To see how broad this definition is, consider a sudoku puzzle. It seems reasonable to say that, to solve this puzzle, a human would have to “act intelligently.” That is, randomly writing down numbers is unlikely to work. So, if you write a computer program that solves sudoku puzzles, you now have an artificial intelligence.
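
To make this concrete, here is a minimal sketch of what such a program might look like: a simple backtracking solver written in Python. The board layout and function names are illustrative choices for this post, not drawn from any particular library or standard.

# A 9x9 sudoku board as a list of lists, with 0 marking an empty cell.

def is_valid(board, row, col, value):
    """Return True if placing value at (row, col) breaks no sudoku rule."""
    if value in board[row]:
        return False
    if any(board[r][col] == value for r in range(9)):
        return False
    box_r, box_c = 3 * (row // 3), 3 * (col // 3)
    return all(
        board[r][c] != value
        for r in range(box_r, box_r + 3)
        for c in range(box_c, box_c + 3)
    )

def solve(board):
    """Fill empty cells by trying candidates and backtracking on dead ends."""
    for row in range(9):
        for col in range(9):
            if board[row][col] == 0:
                for value in range(1, 10):
                    if is_valid(board, row, col, value):
                        board[row][col] = value
                        if solve(board):
                            return True
                        board[row][col] = 0  # undo and try the next candidate
                return False  # no candidate fits here, so backtrack
    return True  # no empty cells remain, so the puzzle is solved

The program simply searches through legal placements until the grid is filled, which is all it takes to satisfy the broad definition above.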

This may feel underwhelming, as solving sudoku puzzles is an admittedly narrow kind of intelligence. However, computers are much better at this kind of task than humans. This is precisely the compelling power of artificial intelligence: it can supplement or replace human intelligence and decision-making in tasks that require careful pattern analysis, prediction, and weighing of future outcomes. This kind of AI exists in technologies you interact with daily: artificial intelligence decides what results your search engine shows you, recommends music to you on Spotify, places targeted ads and content on your Facebook newsfeed, and powers translation software.

AI has the potential to profoundly change the world we live in, enabling a better future. With increasingly powerful hardware and sophisticated algorithms, artificial intelligence will be able to empower humans to do more and live fuller lives. Examples of this hopeful future exist already. Farmers, long subject to the whims of nature, are using artificial intelligence to decide how to distribute resources and determine optimal harvesting times. Artificial intelligence systems can detect lung cancer and diagnose childhood conditions with remarkable accuracy, and may one day support doctors in clinical practice. We will soon be able to translate speech in real time, breaking down the barriers of language that can, at times, divide us. And Microsoft developed an app to help the visually impaired navigate their surroundings, bringing us closer to a world that is accessible to everyone.

However, when left unchecked, artificial intelligence also has the potential to create significant harm. It can replicate social biases, reinforce injustice, and exacerbate inequality. This can happen when AI systems are built on incomplete or biased data, or when the creators of these systems fail to consider the potentially disparate impact of their models. The consequences are severe, measured in human lives and opportunities. For example, risk assessment software used across the country in sentencing to predict future criminal behavior has been shown to be biased against black people. Another study demonstrated that self-driving cars struggle to detect dark-skinned pedestrians, leaving people of color to pay for the mistakes of AI with bodily harm.

The implications can be even more dire when governments and institutions of power abuse the capabilities of AI. Already, China is using artificial intelligence to profile and target an ethnic Muslim minority. Artificial intelligence has also been implicated in threats to democracy, helping spread disinformation and eroding trust in media through the creation of deepfakes: fake videos created using AI techniques to portray events that never occurred. The potential downsides of AI are frightening and difficult to predict. What is highly likely, however, is that the burden of these ills will fall disproportionately on people of color and marginalized communities.

Standards for AI

Against this tumultuous but exciting landscape, President Trump signed an executive order in February titled “The American AI Initiative,” which provides first steps towards establishing a national policy and roadmap for artificial intelligence. While the executive order is lacking in certain key respects, as CDT has previously discussed, it generally recognizes the dual nature of AI: that it can be both a tool towards transformative good and a mechanism of oppression. The former is reflected in the executive order’s call to prioritize funding for AI research and development. And, while the executive order does not explicitly outline the dangers posed by AI, it does address the latter by calling for AI regulation that upholds and protects “civil liberties, privacy, American values, and U.S. economic and national security.” To this end, it specifically calls upon the National Institute of Standards and Technology (NIST) to issue, within 180 days of the executive order, a plan for “federal engagement in the development of technical standards and related tools in support of reliable, robust and trustworthy systems that use A.I. technologies.” As an initial step towards drafting this plan, NIST issued a Request for Information seeking input on the development of technical standards for AI and on the role of the federal government in that process.

CDT submitted a comment in response last Monday. We believe that standards will be critical in ensuring that artificial intelligence is innovative while still serving the public good. Without regulation and standards for its responsible use, it will be impossible to mitigate the harmful consequences of artificial intelligence or to prevent its reckless exploitation. When profit and power are the only motives, artificial intelligence will continue to be used to serve the bottom line and cement authority, while disregarding the implications for human lives. Well-crafted standards, however, would ensure that AI technologies are developed in line with widely held ethical principles and social norms. They would serve as guidelines to help protect civil liberties, guarantee equal access to AI technologies, prevent abuse, and generally ensure that AI advances the cause of our shared humanity. At the same time, standards could further innovation by facilitating collaboration and the exchange of knowledge between AI research labs, as well as by promoting the interoperability of AI technologies. In the process of crafting any standards, however, it is crucial that organizations outside the private sector (academia, nonprofits, the government) continue to be involved. Market forces alone will not lead to standards that adequately address the social and ethical challenges posed by AI.

Our comment highlights some important areas of consideration in attempting to regulate artificial intelligence. Thought must be given to how we collect, store, and talk about data. Many AI systems are built on masses of information collected about real people. We need standardized methods for collecting and securing this data in order to protect individual privacy rights. In addition, there is currently no framework for understanding and comparing the datasets used to build different AI systems. This makes it difficult to know what is in a dataset, what the dataset was used for, and why. To remedy this, we suggest standardizing the concept of “datasheets for datasets,” a framework by which dataset creators can provide dataset documentation, with information such as intended use case, necessary maintenance, and known biases. This would help developers choose the best dataset to use in building a particular AI system. It would give policymakers, as well as auditors and purchasers of AI technologies, a better way of comparing the quality and limitations of datasets and the AI models created from them. Overall, it would result in AI systems that are more transparent and trustworthy.
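
As a rough illustration of the idea, a datasheet could be captured as a small structured record that travels with the data itself. The sketch below, in Python, uses hypothetical field names; the actual “datasheets for datasets” proposal is framed as a set of documentation questions rather than a fixed schema.

# Hypothetical, machine-readable sketch of a dataset "datasheet".
# The field names are illustrative, not a standardized schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Datasheet:
    name: str
    intended_uses: List[str]     # tasks the dataset was collected to support
    collection_method: str       # how and when the data was gathered
    known_biases: List[str]      # documented gaps or skews in coverage
    maintenance: str             # who updates the dataset, and how often
    prohibited_uses: List[str] = field(default_factory=list)

example = Datasheet(
    name="storefront-images-2018",
    intended_uses=["benchmarking object detection"],
    collection_method="photos scraped from public listings in 2018",
    known_biases=["urban locations are heavily overrepresented"],
    maintenance="no scheduled updates; corrections accepted by email",
    prohibited_uses=["inferring information about individual people"],
)

Even a bare-bones record like this would let a developer, auditor, or purchaser see at a glance what a dataset was built for and where it is known to fall short.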

We also suggest that AI systems should be audited regularly, to evaluate them against legal and social norms. This process of consistent evaluation will be crucial in preventing and mitigating the potential harm created by inaccuracy and bias in artificial intelligence systems. To this end, we recommend the creation of a standardized auditing framework. Audits should be detailed and comprehensive assessments of AI systems, analyzing all aspects of a system’s implementation. To facilitate this analysis, there should be standards for transparency in the development and use of AI systems, outlining the information creators of these systems must disclose. Developers should take these standards into account from the start and integrate auditability into their designs. If AI technologies are designed with accountability in mind, they are less likely to have unintended and harmful consequences.

These consequences often take the form of disparate impact: certain individuals are unfairly disadvantaged due to protected characteristics such as race, gender, or sexual orientation. As an example, imagine AI-powered job recruitment software that, for whatever reason, disproportionately rejects applications from female candidates. New AI technology should be tested for these disparities. Currently, however, there is little public knowledge about effective disparate impact testing methods. The research, development, and standardization of such methods should be a priority. Disparate impact testing will help us detect and prevent discrimination and injustice in AI.
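
One simple and widely used check compares selection rates across groups, sometimes called the “four-fifths rule.” The Python sketch below uses made-up applicant records and an illustrative 0.8 threshold; real disparate impact testing involves far more care in defining groups, outcomes, and statistical significance.

# Illustrative disparate impact check: compare each group's selection
# rate to the most-favored group's rate. The records below are made up.

from collections import defaultdict

applicants = [
    {"group": "women", "selected": True},
    {"group": "women", "selected": False},
    {"group": "women", "selected": False},
    {"group": "men", "selected": True},
    {"group": "men", "selected": True},
    {"group": "men", "selected": False},
]

totals = defaultdict(int)
selected = defaultdict(int)
for record in applicants:
    totals[record["group"]] += 1
    selected[record["group"]] += record["selected"]

rates = {group: selected[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} ({flag})")

In this toy example the recruitment tool selects women at half the rate of men, which the check flags for closer review; standardized methods would specify what to measure and what to do when such a disparity appears.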

In the immediate future, NIST will draft its plan for federal engagement in artificial intelligence standards-setting by the first week of July 2019. The outline for the draft shows NIST’s plan will be a high-level discussion of existing tools and standards, priorities for the involvement of the U.S. government, and what the future of AI standardization might look like. NIST will then open the draft to a two-week public comment period to gain input from the AI community. Hopefully, with the final draft of this plan, the government will begin to think critically about the realities of artificial intelligence and take first steps towards AI governance. With luck, the plan will initiate an ongoing conversation about our future with artificial intelligence, and will encourage industry and academia to develop the tools needed to evaluate the social impact of AI technologies and mitigate any harm. A secure and equitable future depends on it.