AI Policy & Governance, CDT AI Governance Lab

It’s (Getting) Personal: How Advanced AI Systems Are Personalized

This brief was co-authored by Princess Sampson.

Generative artificial intelligence has reshaped the landscape of consumer technology and injected new dimensions into familiar technical tools. Search engines and research databases now by default offer AI-generated summaries of hundreds of results relevant to a query, productivity software promises knowledge workers the ability to quickly create documents and presentations, and social media and e-commerce platforms offer in-app AI-powered tools for creating and discovering content, products, and services.

Many of today’s advanced AI systems like chatbots, assistants, and agents are powered by foundation models: large-scale AI models trained on enormous collections of text, images, or audio gathered from the open internet, social media, academic databases, and the public domain. These sources of reasonably generalized knowledge allow AI assistants and other generative AI systems to respond to a wide variety of user queries, synthesize new content, and analyze or summarize a document outside of their training data.

But out of the box, generic foundation models often struggle to surface the details most relevant to specific users. AI developers have begun to argue that greater personalization will make these technologies more helpful, reliable, and appealing by providing more individualized information and support. Visions of powerful AI assistants and agents that can plan and execute actions on behalf of users are motivating developers to make their tools increasingly "useful" to people, which in practice means more personalized. As a result, practitioners and policymakers will be asked to weigh in, with increasing urgency, on what many will frame as tradeoffs between privacy and utility, and on how to preserve human agency and reduce the risk of addictive behavior.

Much attention has been paid to the immense stores of personal data used to train the foundation models that power these tools. This brief continues that story by examining how generative AI-powered tools draw on user data to deliver progressively personalized experiences, teeing up conversations about the policy implications of these approaches.

Read the full brief.