AI Bill of Rights a Good First Step Toward Improved Public Services

Blueprint could lead to safer and more equitable AI in education and other government services

In early October, the White House Office of Science and Technology Policy released its Blueprint for an AI Bill of Rights. Although much of the Blueprint focuses on commercial uses of AI, it is also a critical first step in articulating the potential harms of AI-based systems in the delivery of public services and in pointing the way toward mitigating those harms.

One of the key public service sectors making increasing use of AI-based systems is education. As CDT research has established and the Blueprint confirms, many of these uses pose risks to students. Student activity monitoring systems harm the well-being of students, particularly LGBTQ+ students, who risk being outed without their consent, and students of color, who are most affected by the strengthening of the school-to-prison pipeline that these monitoring systems enable.

Remote proctoring systems make exams excessively difficult and stressful for students with disabilities and students of color, impacting their professional and academic opportunities. Dropout early warning systems, school assignment algorithms, and personalized learning systems are aimed at increasing academic opportunities for students but will do so only if they are implemented carefully and equitably. 

The situation is no less precarious for other government services. AI-based identification services for government benefits have locked people out of essential services in their moments of need, and automated eligibility determination systems have restricted people's access to critical benefits. Although initially implemented to improve public service delivery, all of these uses of AI have had significant detrimental impacts on people's lives.

AI-based systems need to be carefully designed to ensure they contribute to people’s overall well-being, rather than detract from it. The Blueprint calls out some of the key qualities and values necessary to develop positively impactful AI systems used in public service delivery. They include:

  • Efficacy as a value for AI-based systems: The Blueprint notes that systems should be effective as well as safe. Efficacy is particularly important in public sector contexts, where stewardship of resources is a core operating principle: spending money on ineffective systems wastes public funds that could otherwise improve people's well-being. 
  • A strong stance against constant student activity monitoring: The section on data privacy specifically calls out that, “Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.”
  • Notice and alternatives in government service delivery: The Blueprint says that users should be notified when automated systems are used and should have alternatives, or the option to appeal automated decisions. These two rights are especially important, and achievable, in the context of government service delivery.

The Blueprint recognizes that failure to take these steps can harm students and those who use government benefits. Consequently, it urges schools to critically examine “algorithms that purport to detect student cheating or plagiarism, admissions algorithms, online or virtual reality student monitoring systems, projections of student progress or outcomes, algorithms that determine access to resources or programs, and surveillance of classes (whether online or in-person).” Governmental agencies should also assess “[s]ystems related to access to benefits or services or assignment of penalties” for compliance with the Blueprint. 

In some respects, the Blueprint is meant to play a role for equitable and effective AI systems similar to the one the Fair Information Practice Principles (FIPPs) have played for privacy. The FIPPs have improved user privacy by giving agencies high-level guidelines for structuring their data programs around users' privacy needs, and CDT has drawn on the FIPPs in its recommendations about responsible data use in schools and public agencies. If the Blueprint can serve a similar function by helping agencies identify more equitable and effective approaches when adopting AI-based systems, it could be a boon for people who use or interact with those systems. 

Although the Blueprint is an exciting first step toward equitable, effective, and safe AI systems, it is only a first step; much work remains to be done. Agency commitments made in response to the Blueprint are beginning that work. For example, the United States Department of Education (ED) has committed to releasing, by early 2023, specifications for what constitutes safety, fairness, and efficacy in the use of AI-based systems in schools. ED needs to ensure these specifications address the detrimental effects that student activity monitoring and other AI-based technologies can have, and how the use of these technologies can run counter to the Blueprint. 

Additionally, agencies, including ED, need to be clear about how to protect against algorithmic discrimination and about what affected populations can do when they face it. Vagueness around standards such as what constitutes “unjustified different treatment” could weaken safeguards for students and other populations. 

The Blueprint has the potential to serve as a strong foundation for a more equitable and safe technological landscape, and we hope it is the beginning of a full-throated and concerted effort to craft that world. To that end, CDT plans to make recommendations to individual agencies about their use of AI in education and other government programs.