AI Policy & Governance

Beyond ChatGPT: ‘Social Agents’ & the Policy Gap

Also authored by CDT Intern Emilie Grybos

Remember when it was strange and exciting that Siri could tell you a joke? Today, with Siri nearly 12 years old, it’s no longer novel to talk to our devices as we might a friend. Indeed, technologies that leverage this kind of interactivity have integrated themselves into our day-to-day lives across the home, workplace, and public spaces. 

They are commonplace across a wide range of domains, whether we realize it or not: from customer service to finance, healthcare, and education. In the U.S. alone, usage of voice-activated smart home speakers has increased year-over-year since 2017, with an estimated 100 million people – roughly 35% of the population – owning one in 2023. 

Computer scientists and other academics refer to these technologies as “social agents.” Social agents are technologies that, in their design and application, foreground interactivity – hence they are “social” – in ways that explicitly aim to mimic human-to-human interaction. Social agents include voice-controlled “conversational agents” like Siri and Alexa, online “chatbots” like service chatbots and ChatGPT, and even physical robots. Though already widely deployed, these technologies are still in their infancy relative to their projected capabilities across applications. 

Given the dynamism and intimacy of interactivity, this increasingly common tech poses tangible risks that warrant direct attention now. 

The defining promise of social agents – the intentional crafting of a social relationship – fosters users’ feelings of emotional connection. This emotional connection is why social agents hold promise, for example, in the healthcare domain; the cuddly robotic baby seal Paro is positioned as positively augmenting therapeutic caregiving in hospitals and long-term care facilities with its pet-like appearance and demeanor. Similarly, social agents are poised to become increasingly common as toys for children (a step beyond the Furby!), building “relationships” with kids who are still learning emotional intelligence and social dynamics. 

With these proposed benefits come possible negative effects. Healthcare has long been understood as a uniquely risky domain, and the risks of employing social agents like Paro in this setting have not yet been appropriately reckoned with. Unsurprisingly, deploying social agents as toys also puts children at increased risk; children in particular may struggle to distinguish advertising from other content when a social agent successfully mimics the informal conversational behavior normal among humans.

This mimicry is predicated on anthropomorphism in behavior, appearance, or both. It exposes users to novel risks that current regulatory attention around artificial intelligence (AI) more generally does not fully account for; human-like presentations of these technologies have been shown to attenuate users’ privacy concerns. While both privacy and AI are valuable topics of discussion, it’s important to factor interactivity into this conversation by including social agents in it. Given the unique and pressing risks that social agents present, it’s critical for policymakers to consider the implications of such nascent technologies. 

We are at a perfect juncture to foster collaboration at the intersection of social agents and policy: these agents are technically feasible and being deployed, so we can have concrete rather than esoteric discussions, but we are early enough in their development that we still have time to shape their future course. Technologists should proactively take responsibility for the development of these social technologies, with specific considerations around, for example, human rights like privacy incorporated into early design decisions. 

Policymakers can proactively consider questions around, for example, liability risk or classification (what could now be considered a medical device?), with insight from early deployments and projected applications. The perspective gained at this intersection can result in an informed agenda that takes a step back from sci-fi or existential imaginings to real-life risks and responsibilities. 

To help inform this discussion, we have been undertaking an ongoing systematic review of existing research on social agents at the intersection of policy and academic work. It’s clear that there’s an overlap between how academics understand the implications of their work and how policymakers are beginning to think about an increasingly complicated social world with non-human actors taking human-like roles. 

While a review of selected academic research papers is ongoing, some early trends have emerged, with notable subsets of papers undertaking detailed analyses of legal issues such as liability or intellectual property, data security and privacy concerns, and impacts on or incorporation of human rights. On the other hand, it was surprising to see very few discussions around other important and relevant topics such as accessibility, the intentional inclusivity of different levels of ability (e.g., disability rights), the ability to opt out of interactions altogether, and mobility and access considerations with robots interacting in physical, public spaces.

One lesson evident from the literature is the importance of fostering a shared and precise vocabulary to concretize discussions among technologists and policymakers. Definitional specificity can be lacking in conversations framed only around AI, which can conflate sci-fi with the technically feasible. A substantive body of literature examines the intersection of AI, social agents, and policy implications (e.g., Ishii, 2019; Subramanian, 2017). 

The inclusion of a social agent framing – such as a chatbot – would better contextualize policy for technologies with social capabilities by foregrounding the mechanisms of interactivity. While chatbots are currently a legally undefined category, those that employ AI in medical or legal professional contexts, or in interactions with children, warrant particular attention that is not fully addressed in policy proposals such as the EU AI Act (Leaua & Didu, 2021). 

Given the proliferation of science fiction stories that perpetuate existential fears around fully agentic, superintelligent robots, it’s not surprising that the invocation of robots and policy lends itself toward more ethical or philosophical discussions around “robot rights” (e.g., Ashrafian, 2015; Robertson, 2014; Sharkey, 2017). This is indicative of how social agents, through their social interactivity, force reflection on what it means to be human. That reflection should, in turn, be used concretely to reinforce the importance of human rights today – as in efforts such as the Rathenau Instituut’s report and Mpinga et al.’s review of human rights in research on AI.

Because they’re predicated on the development of interfaces that enable interaction, social agents pose unique risks while also grounding conversations about the novelty and threat of AI in the innate human-ness of interactivity. The literature on social agents is positioned to substantively further existing discussions for both technologists and policymakers – a critical relationship, especially given that these technologies are still very much in progress. Through this ongoing review, then, we aim to contribute to this relationship-building and knowledge transfer.