By Ashley Lee, Berkman Klein Center for Internet and Society Affiliate, Harvard University and CDT Non-Resident Fellow alum, and Victoria Hsieh, Computer Science Undergraduate, Stanford University
Disclaimer: The views expressed by CDT’s Non-Resident Fellows and any coauthors are their own and do not necessarily reflect the policy, position, or views of CDT.
On October 30, 2023, the White House released an Executive Order on artificial intelligence, directing government agencies to prioritize the development and use of AI with an emphasis on safety, security, and trustworthiness. The executive order represents a significant step in the establishment of a comprehensive framework for AI, entrusting government agencies with a new set of obligations to put these guiding principles into practice. It is the culmination of years of advocacy and research conducted by various stakeholders, including civil society and academia.
The realization of safe and responsible AI largely hinges on nurturing a skilled workforce capable of developing, deploying, and governing these technologies in a safe, secure, and responsible manner. As AI and machine learning advance at a rapid pace, universities are responding to the urgent need for computing ethics programs and initiatives. In Responsible Work in Computing, a research initiative led by the first author, our research team has been collaborating with emerging technologists to investigate how universities can better prepare the next generation of AI technologists to practice responsible computing throughout their careers.
Until recently, ethics was not a core component of the training for computer scientists and AI technologists. This has changed rapidly in recent years. Arguably, we are now witnessing the rise of an “ethical tech” movement, with computer science departments across the nation starting to incorporate programs and initiatives that address the societal and ethical implications of computing and AI. These programs vary in structure and content.
Within AI and related domains, the concept of tech “ethics” has expanded to encompass a wide array of issues at the intersection of technology, power, and society. Engaging in AI ethics goes beyond equipping technologists with the necessary tools to address ethical challenges that arise in their respective professions. It also involves addressing broader structural questions about worker rights, workplace culture, workforce diversity and development, and other systemic conditions that underlie tech ethics issues. These inquiries surrounding tech and AI ethics are intimately linked to the broader structural imbalances that result from the concentration of power in Silicon Valley and other global tech hubs. So, what actions can universities and policymakers take to educate the next generation of AI technologists? Here are some key insights from our ongoing research.
Bridging the gap between AI ethics education and professional practice on the ground
Bridging the gap between AI ethics education and professional practice is crucial for fostering a more responsible AI workforce. While tech and AI ethics education within universities is on the rise, the instructional approaches frequently remain apolitical, abstract, and detached from professional practice. Incorporating ethical principles as a small part of a computer science course can leave students feeling like ethics is an afterthought rather than an integral aspect of their everyday professional practice.
Our discussions with emerging technologists highlighted the challenges they face when trying to apply the ethical principles they learn in the classroom to real-world scenarios. Navigating tech ethics issues in practice can prove complex and challenging, and technologists may face a range of barriers, including competing priorities, limited resources, and a lack of organizational support. To address these challenges, it is imperative to adopt interdisciplinary approaches that seamlessly integrate ethics into computer science and AI-related curricula, making them both technically and culturally relevant. For example, students shared that class assignments often ask them to explain the ethical implications of an algorithm (e.g., a hiring algorithm) only at the end of a coding assignment. A more integrated approach could have students design an algorithm (e.g., for fair hiring) and explain how and why they made certain algorithmic design choices. This approach also allows students to explore who gets to make those design decisions in an organizational setting and learn about the tools needed to shift power dynamics. As AI continues to intersect with various aspects of contemporary society, it is crucial to expand interdisciplinary discussions and educational initiatives related to AI beyond computer science to a wider range of disciplines.
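To make the integrated-assignment idea concrete, here is a minimal sketch of what such an exercise might look like: students implement a simple candidate-scoring function, document each design choice in the code itself, and audit the outcome across groups. All candidate data, criteria, weights, and function names below are hypothetical illustrations, not a prescribed curriculum.

```python
def score_candidate(candidate):
    """Score a candidate on job-relevant criteria only.

    Design choices students would be asked to justify:
    - protected attributes (e.g., gender, race) are deliberately excluded;
    - each criterion's weight is explicit, so it can be debated and revised.
    """
    weights = {"experience_years": 0.4, "test_score": 0.6}  # hypothetical weights
    return (weights["experience_years"] * min(candidate["experience_years"], 10) / 10
            + weights["test_score"] * candidate["test_score"] / 100)

def selection_rates(candidates, threshold=0.5):
    """Compute per-group selection rates, a simple demographic-parity audit."""
    rates = {}
    for group in {c["group"] for c in candidates}:
        members = [c for c in candidates if c["group"] == group]
        selected = [c for c in members if score_candidate(c) >= threshold]
        rates[group] = len(selected) / len(members)
    return rates

# Hypothetical candidate pool for the audit step of the assignment.
candidates = [
    {"group": "A", "experience_years": 6, "test_score": 80},
    {"group": "A", "experience_years": 2, "test_score": 40},
    {"group": "B", "experience_years": 8, "test_score": 90},
    {"group": "B", "experience_years": 1, "test_score": 30},
]
print(selection_rates(candidates))  # equal rates here; skewed data would surface disparity
```

The pedagogical point is not the scoring formula itself, but that every design decision (which features to use, how to weight them, where to set the threshold, how to audit outcomes) is made visible and therefore open to discussion about who gets to make it.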
Beyond AI monoculture: Building diverse futures in the AI age
Building diverse futures in the AI age is a critical endeavor. It involves addressing the disproportionate harms marginalized communities may face in AI development, while also cultivating a more diverse workforce that brings a pluriversal perspective to bear on problems of AI monoculture. Diverse perspectives and voices bring a wealth of unique insights and ideas to the table. A diverse workforce has the capacity to take on a broader range of socio-technical challenges—and opportunities—that reflect the values and aspirations of a more inclusive and equitable society. This endeavor involves not only increasing representation of underrepresented groups, such as women, people of color, and members of the LGBTQ+ community, but also dismantling systemic barriers and biases that hinder their participation in education and the workforce. Moreover, it requires creating inclusive educational and work environments and promoting mentorship and support networks for underrepresented groups. Simultaneously, universities should foster an environment where students can bring their diverse values into the computer science classroom, rather than viewing computing as a domain solely dependent on technical prowess.
From automation to agency: Equipping technologists with a variety of tools for social transformation
Shifting from automation to agency is a key aspect of preparing the next generation of AI technologists. In our conversations, many young technologists shared a perceived lack of ethical agency in the early stages of their careers. They express a desire to exercise more power and agency as they progress into senior roles or acquire greater expertise and responsibility. Their perceived lack of agency stands in stark contrast to their ambitions to tackle big questions and make a big impact. AI ethics programs often focus on critique but can do more to empower students with a broader toolkit for social transformation.
When confronted with AI ethics challenges, many young technologists simply point to regulation as the primary remedy. Regulation is indeed a key lever in governing AI. However, regulation is also just one tool within a wide spectrum of available levers for shifting the socio-technical landscape of AI. AI ethics education can provide aspiring technologists with a wider variety of tools for social transformation. This includes experimenting with alternate design methods and processes, as well as fostering greater community and stakeholder engagement. For example, technologists are prioritizing a more deliberate approach to technology design by actively experimenting with “slow tech” processes—a departure from the “move fast and break things” mentality. AI ethics education can incorporate alternative design methods that encourage students to go beyond traditional processes centered solely on algorithmic efficiency, while considering a broader spectrum of values that could inform the design process.
Towards a cultural transformation: Nurturing growing movements to reimagine AI
Young technologists in our study often overlook the power of collective action in challenging and reimagining dominant AI narratives and practices. To achieve a cultural transformation in the field of AI, it is crucial to harness the power of collective action and community-led efforts. Growing movements within the tech and AI community are advocating for responsible practices and a healthier ecosystem (the Design Justice Network is just one example). These movements draw participants from diverse backgrounds, including tech workers, activists, civil society organizations, and academics, among others. AI ethics education can play a significant role in empowering students to collectively reimagine AI practices and processes, and contribute to a cultural transformation that prioritizes ethical and responsible AI.
Special thanks to student researchers who have contributed to this research initiative: Anushree Aggarwal, Autumn Dorsey, Victoria Hsieh, Kate Li, Swati Maurya, and Sam Serrano. This research initiative has been financially supported by the Stanford Center on Philanthropy and Civil Society, Stanford Center for Ethics, and Harvard Berkman Klein Center for Internet & Society.