“This is Transparency to Me:” User Insights into Recommendation Algorithm Reporting

Illustration depicting a pixelated user cursor reaching through the browser screen to affect the platform algorithm. CDT Research report entitled “This is Transparency to Me: User Insights into Recommendation Algorithm Reporting.”

Recommendation algorithms by and large determine what people see on social media, yet users know little about how these algorithms work or what information they use to make their recommendations. But what, exactly, should platforms share with users about recommendation algorithms that would be meaningful to them? Prior research has explored frameworks for the explainability of algorithms, as well as design features across social media platforms that can contribute to their transparency and accountability. We build on these prior efforts to explore what a recommendation algorithm transparency report might include and how it could present information to users.

Transparency reports have been on the rise among technology and social media companies, with 88 companies publishing such reports as of July 2021 (“Transparency Reporting Index – Access Now’s Global Database,” n.d.). These reports are the result of much advocacy from civil society and activist groups and primarily include information about content moderation practices and government requests for data (Vogus & Llansó, 2021). They rarely include information about service providers’ content recommendation algorithms, although these algorithms deeply impact the experiences of people who use platforms, as well as advertisers, public figures, and businesses.

Transparency reports are also usually generated periodically and contain general information about a company and its practices. Because our goal is to suggest a way for people to better understand how recommendation algorithms affect their personal experience, we propose a more engaging approach to recommendation algorithm reports: data-driven, interactive, and personalized. While some information about recommendation algorithms and their use of data can be found in companies’ privacy policies and settings pages, we argue that platforms should publish a standalone report in which everyday users can learn how personalized content and recommendation algorithms work and affect their online experiences.

To understand how to do so, and what such a report might include to best support everyday users’ needs, we conducted this two-part, human-centered co-design research project. Co-design is a method that involves end users early and throughout the design process, building on their insights to create tools that are most meaningful to them. We conducted two sets of individual sessions with a diverse group of casual social media users (n=30) to understand what information they would like companies to share about recommendation algorithms and how that information should be shared.

In Study 1, participants took part in several design activities intended to prompt reflection on their needs and desires. The goal was to establish a foundation of what everyday users are interested in and care about. The outcome of Study 1 was a set of guidelines for a recommendation algorithm transparency report, as well as insights about features that could be incorporated into prototypes of future reports.

In preparation for Study 2, we created sketches of screens (“prototypes”) of what a recommendation algorithm report could look like, based on findings from Study 1. These prototypes were created as provocations, intended primarily to generate a second conversation with participants about their needs and values rather than to suggest how a report should look. In Study 2, the same participants were invited to examine these manifestations of their own and other users’ ideas, and to reflect on the prototypes’ strengths and drawbacks.

Based on the interviews from both studies, we develop guidance about recommendation algorithm reports: what they should include, what aspects they should emphasize, and how they should be communicated.

Research Findings

We present findings in two parts: (1) guidelines about what information should be included in recommendation algorithm transparency reports, and how it should be presented; and (2) initial suggestions, in the form of prototypes, about more engaging and interactive ways of presenting such information to users, with an evaluation of their strengths and weaknesses.

Participants primarily wanted a recommendation algorithm transparency report to include the following (a rough structural sketch follows the list):

  • Information about what they do see, as opposed to what is being filtered out;
  • What data is collected and inferred about them to be used in recommendation algorithms, and whether and how that data is shared with external partners;
  • What data is obtained from other sources to be used in recommendation algorithms and from whom; and
  • Whether and how they can (or cannot) make changes to an algorithm and the data it uses.
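
To make those four content areas concrete, the sketch below shows one hypothetical way a per-user transparency report could be structured as data. It is a minimal TypeScript illustration of the guidelines above; every interface and field name is our own assumption, not a schema any platform actually uses.

```typescript
// Hypothetical shape of a per-user recommendation transparency report.
// All names are illustrative, derived from the four guidelines above.
interface TransparencyReport {
  userId: string;
  generatedAt: string; // ISO 8601 timestamp

  // 1. What the user does see (rather than what is filtered out).
  shownContent: Array<{
    itemId: string;
    reason: string; // e.g., "You follow the cosmetic brand X"
  }>;

  // 2. Data collected or inferred about the user, and whether it is shared.
  dataUsed: Array<{
    name: string; // e.g., "watch history", "inferred age range"
    origin: "collected" | "inferred";
    sharedWithPartners: boolean;
    partners?: string[]; // named only when sharedWithPartners is true
  }>;

  // 3. Data obtained from other sources, and from whom.
  externalData: Array<{
    name: string;
    source: string; // e.g., "advertiser upload", "data broker"
  }>;

  // 4. Which algorithm inputs the user can change, and which they cannot.
  controls: Array<{
    dataName: string;
    userAdjustable: boolean;
    settingUrl?: string; // present only when a control actually exists
  }>;
}
```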

They wanted the information presented to them to be:

  • Specific—to clearly explain the choices made by platforms when creating recommendation algorithms and how those choices would impact users, and to avoid general phrasing such as “to improve user experiences;”
  • Direct—to include data-driven, to-the-point information that does not attempt to frame platform recommendation systems or data collection practices in an overly positive light; and
  • Demonstrated—to include specific examples of how a given recommendation was made (e.g., “you are seeing this because you follow the cosmetic brand x”), attached to a more general description of how the recommendation algorithm works (a brief sketch of this pairing follows the list).
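
As one hedged illustration of the “demonstrated” guideline, the sketch below renders a specific, per-item reason from structured signals, which a report could attach to a general description of the algorithm. The signal types, function name, and wording are invented for illustration, not drawn from any real platform.

```typescript
// Illustrative only: produces the kind of concrete, per-recommendation
// example the "demonstrated" guideline calls for.
type Signal =
  | { kind: "follows"; account: string }
  | { kind: "watched"; topic: string }
  | { kind: "similarUsers"; topic: string };

// Render a specific, human-readable reason for one recommendation.
function explainRecommendation(signals: Signal[]): string {
  const reasons = signals.map((s) => {
    switch (s.kind) {
      case "follows":
        return `you follow ${s.account}`;
      case "watched":
        return `you recently watched content about ${s.topic}`;
      case "similarUsers":
        return `people with similar activity engaged with ${s.topic}`;
    }
  });
  return `You are seeing this because ${reasons.join(" and ")}.`;
}

// Example: "You are seeing this because you follow the cosmetic brand X."
console.log(
  explainRecommendation([{ kind: "follows", account: "the cosmetic brand X" }])
);
```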

Study 2 introduced several ways to present recommendation algorithm transparency reports that may differ from what one might initially expect. We found that participants were particularly positive about transparency reports that were:

  • Visual—Information presented graphically allowed participants to take in and process a large amount of information about the topic by quickly skimming the report;
  • Interactive—Participants were excited about interactive designs that allowed them to “play around” with aspects of the algorithm to better understand its impacts;
  • Personalized—While not necessary in every report, participants viewed the personalized nature of prototypes as more engaging and more inviting than general information about how a recommendation algorithm works; and
  • Controllable—Once participants learned how a recommendation algorithm worked, they appreciated also being able to exert control over it, especially over what personal data it incorporated (a sketch combining the interactive and controllable qualities follows the list).
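
To suggest how the “interactive” and “controllable” qualities might work together, here is a small hypothetical sketch: a control that lets a user exclude a personal data source from the algorithm and immediately see which inputs remain active. The class, state model, and names are assumptions for illustration, not any platform’s real API.

```typescript
// Hypothetical sketch of a "controllable" report element: the user toggles
// which personal data sources the recommendation algorithm may use.
interface DataSourceControl {
  name: string;     // e.g., "location history"
  enabled: boolean; // whether the algorithm may use this source
}

class AlgorithmControls {
  constructor(private sources: DataSourceControl[]) {}

  // Flip one data source on or off; returns the new state, or null if the
  // source is unknown (some inputs may simply not be user-adjustable).
  toggle(name: string): DataSourceControl | null {
    const source = this.sources.find((s) => s.name === name);
    if (!source) return null;
    source.enabled = !source.enabled;
    return source;
  }

  // A report UI could re-run a recommendation preview after each toggle,
  // letting users "play around" and watch the feed change.
  activeSources(): string[] {
    return this.sources.filter((s) => s.enabled).map((s) => s.name);
  }
}

const controls = new AlgorithmControls([
  { name: "watch history", enabled: true },
  { name: "location history", enabled: true },
]);
controls.toggle("location history");
console.log(controls.activeSources()); // ["watch history"]
```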

In summary, this work presents the perspectives of everyday social media users and identifies the needs and values that future recommendation algorithm transparency reports should support. There is no single right way to provide an algorithmic transparency report, and we are therefore not offering a template or a recommendation for how every social media platform should provide recommendation algorithm transparency.

Rather, these perspectives are critical for social media platforms to consider, as users have a right to choose what content they consume online. Implementing meaningful forms of recommendation algorithm transparency can increase people’s trust in platforms and contribute to a safer ecosystem of transparent and accountable social media platforms.

Read the full report here.

View the report’s research prototypes here.