

Explaining “Explainability”

by CDT intern Michael Yang.

Behind every social media feed are decades of computer science research. Platforms like Facebook, Twitter, and YouTube use algorithms called recommendation systems to prioritize which posts, images, and videos to show users first. While such systems help users engage with and discover new, relevant content, social media recommendations are also heavily criticized for their inscrutability and possible harmful effects on society.

Though users currently don’t have much insight into how these systems work, it doesn’t have to be this way. Researchers in computer science, human-computer interaction, and psychology have been exploring concepts of “explainability” for years, looking at what types of descriptions count as explanations and how they can help people make sense of the world. Some of these concepts could help explain to users why they see what they see on social media.

Every algorithmically curated post, image, or video could include a description of why a user was shown that piece of content. To be useful, though, explanations have to fulfill two important criteria: psychological coherence and faithfulness. What might explanations that meet this bar accomplish? With clear explanations, users could make an informed choice about whether they actually want to read, watch, or share recommended content.

RecSys, the world’s foremost computer science conference on recommender systems, defines recommendations as “a particular form of information filtering that exploits past behaviors and user similarities to generate a list of information items that is personally tailored to an end-user’s preferences.” Social media companies use recommender systems to algorithmically curate and select the best possible items for users to see in their feed. Commonly, recommendations are provided with little explanation, but studies show that users highly value the ability to understand and control their news or video feed recommendations.
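To make "exploits past behaviors and user similarities" concrete, here is a minimal, purely illustrative sketch in Python. The interaction matrix, the cosine-similarity scoring, and the numbers are assumptions for demonstration; real platform systems are vastly larger and more complex.

```python
import numpy as np

# Toy interaction matrix: rows are users, columns are items,
# 1 means the user engaged with the item in the past (illustrative data only).
interactions = np.array([
    [1, 0, 1, 0, 1],   # user 0
    [1, 1, 0, 0, 1],   # user 1
    [0, 0, 1, 1, 0],   # user 2
], dtype=float)

def recommend(user: int, k: int = 2) -> list[int]:
    """Rank items the user hasn't seen by the engagement of similar users."""
    # Cosine similarity between this user's behavior and every other user's.
    norms = np.linalg.norm(interactions, axis=1)
    sims = interactions @ interactions[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0  # ignore self-similarity

    # Score each item by similarity-weighted past engagement, hide already-seen items.
    scores = sims @ interactions
    scores[interactions[user] > 0] = -np.inf
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0))  # unseen items for user 0, ranked by similar users' behavior
```

Even in this toy version, the ranking emerges from several interacting quantities, which is exactly what a useful explanation would need to distill for the user.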

Explaining recommendations could be a step toward helping users understand complex social media systems. But not just any explanation will do. First, the explanation has to be psychologically coherent. The psychology and social sciences literature makes clear that explanations should not just be a regurgitation of all of the variables used to arrive at a decision. Instead, an explanation has to pick out the handful of factors that distinguish one option from another.

In other words, the explanation should illustrate why a particular item was recommended over another. The comparison item could be either something that was almost recommended but wasn’t, or an item from a “baseline” recommendation scheme – like chronological sort. 
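As a rough sketch of what such a contrastive explanation could look like, the snippet below compares a hypothetical recommended item against the item a chronological feed would have shown, and reports only the factors that meaningfully differ. The feature names, scores, and threshold are invented for illustration, not drawn from any real platform.

```python
# Hypothetical per-item factors a recommender might score; names and values
# are invented for illustration only.
recommended = {"id": "video_A", "topic_match": 0.9, "friend_engagement": 0.7, "recency": 0.2}
chronological = {"id": "video_B", "topic_match": 0.1, "friend_engagement": 0.0, "recency": 1.0}

def contrastive_explanation(chosen: dict, baseline: dict, threshold: float = 0.3) -> str:
    """Name only the factors that most distinguish the chosen item from the baseline."""
    diffs = {
        factor: chosen[factor] - baseline[factor]
        for factor in chosen
        if factor != "id" and abs(chosen[factor] - baseline[factor]) >= threshold
    }
    ranked = sorted(diffs.items(), key=lambda kv: -abs(kv[1]))
    reasons = ", ".join(f"{factor} was {'higher' if diff > 0 else 'lower'}" for factor, diff in ranked)
    return f"You were shown {chosen['id']} instead of {baseline['id']} because {reasons}."

print(contrastive_explanation(recommended, chronological))
```

A real deployment would need to pull these factors from the production ranking system rather than from hand-written dictionaries, which is where the second criterion, faithfulness, comes in.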

The factors highlighted by an explanation must not only be psychologically coherent, but also faithful to the underlying system. It’s not helpful to offer explanations that are divorced from how the recommendation was actually generated. And it is possible to offer plausible-sounding explanations that nonetheless are unfaithful.

In one of the earliest studies on explaining algorithmic recommendations for movies, the authors discovered that users prefer colloquial explanations, such as "Movie X stars a famous actor," to faithful ones. It would be all too easy to deploy unfaithful explanations that users nonetheless find convincing. Designers of explanations must therefore take great care to optimize for user understanding rather than mere user satisfaction.
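The gap between a faithful explanation and a merely plausible one can be sketched in code. In the hypothetical linear scoring model below, the first explanation is derived from the factor that actually contributed most to the score, while the second is a canned message that sounds appealing but reflects a factor that barely mattered. The weights and feature names are assumptions for illustration, not any platform's real model.

```python
# Assumed linear scoring model: score = sum(weight * feature).
weights = {"watched_similar_videos": 2.0, "famous_actor": 0.1, "trending_now": 0.8}
features = {"watched_similar_videos": 1.0, "famous_actor": 1.0, "trending_now": 0.0}

# A faithful explanation reports the factor that actually drove the score.
contributions = {name: weights[name] * value for name, value in features.items()}
top_factor = max(contributions, key=contributions.get)
faithful = f"Recommended mainly because of '{top_factor}' (contribution {contributions[top_factor]:.1f})."

# A merely plausible explanation sounds convincing but ignores the model.
plausible = "Recommended because this movie stars a famous actor."

print(faithful)   # grounded in the actual contributions
print(plausible)  # appealing, but the 'famous_actor' factor barely mattered here
```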

Even if explanations for social media recommendations are coherent and faithful, there’s no guarantee about the effect they will have. Will users take the time to read the explanations that they’re given? How will their engagement with recommended content change? Will it affect platforms’ business models? These are important questions to consider in the implementation of explanations and important topics for further research.

By explaining recommendations, social media companies are likely to increase the agency of their users. Explanations may also bring transparency to other parts of social media, such as targeted advertising and content moderation. As the influence of algorithms in society grows, so does the need for explanations that help people understand them.