“This is Transparency to Me” – Research Prototypes

Illustration depicting a pixelated user cursor reaching through the browser screen to affect the platform algorithm. CDT Research report entitled “This is Transparency to Me.”

In a recent report by CDT, This is Transparency to Me: User Insights into Recommendation Algorithm Reporting, we explored opportunities for social media platforms to share reports with information about their recommendation algorithms and how those algorithms may affect the people who interact with them.

The report describes two research studies that we conducted: (1) a co-design study, consisting of interviews and design activities, that identified the aspects of recommendation algorithms people want to know more about; and (2) an evaluation study in which we presented participants with a range of possible prototypes of future social media algorithmic transparency reports and asked them to reflect on these hypothetical interfaces.

In this repository we share the final prototypes that we created and presented to participants in Study 2: four different “interface-like” designs that simulate what a transparency report on a hypothetical social media platform might present. For two of the prototypes (Prototype 2 and Prototype 3), we include two images each, representing different instances of the same screen (e.g., after a user hypothetically clicks on a tab in the interface).

The prototypes were designed through an iterative process, building on our insights from Study 1. They are not intended as a template for what recommendation algorithm transparency reports should include; rather, they served as a tool of inquiry into which aspects of such a report people might or might not be interested in. To fully understand the insights drawn from participants’ reflections on these prototypes, we recommend reading the full report findings.

This repository includes a total of six image files that present the following prototypes: 

  • Prototype 1: The Quantified Self recommendation algorithm transparency report [PNG]
    Alt text / description of image: The prototype begins with a short paragraph of text that explains “How We Tailor Your Experience.” It then lists hypothetical information the user has shared, such as name, location (city), and age. Below, a bubble graph shows how the user spent their time: the larger the bubble, the more time spent. Lastly, a section entitled “By the Numbers” shares the user’s hypothetical behavior on the platform with numbers emphasized, such as “1,103 Pieces of Content Engaged With” and “431 comment threads you participated in.”
  • Prototype 2: Inferences in a recommendation algorithm transparency report [PNG]
    Alt text / description of image: The figure shows the view after the user clicks the “Your Inferences” tab. It begins with a short description of why inferences are made for social media recommendations. Below, 14 different circles present topics on which the platform has made inferences, such as “Locations” and “Political affiliation.” “Locations” is currently selected, and to its right is text about what the platform hypothetically knows about the user on that topic: “You have accessed our website from Italy, the UK and Mexico but we think that you live in the UK and visit Spain and Italy frequently for business, based on your job location, events you attended, friends you engaged with, and posts you read.”
  • Prototype 2b: Another view of prototype 2 (inferences in a recommendation algorithm transparency report) [PNG]
    Alt text / description of image: This is another view of prototype 2, but this time the selected inference is “Political affiliation.” As a result, the right-hand side shares the following information with the user: “You have posted or engaged with quite a bit of content that we have tagged and analyzed as related to political affiliation. We determine this from analyzing the tone, the kinds of content, and the subject of the conversations you were engaging in.” Under this text, the user sees a bubble chart suggesting that they have mostly engaged with Democratic-leaning content.
  • Prototype 3: Recommendation algorithm report Plug and Play feature [PNG]
    Alt text / description of image: The page begins with a short bulleted paragraph with information about how the algorithm works. The majority of the page then presents a “Plug and Play” feature: on the left, the viewer can see information they have hypothetically given the platform, such as their location, gender, and age, each with a small “edit” button next to it. On the right, squares show a range of content that has been shown to the user based on that information, for example, an ad for the MoMA museum or text from a group they are in. When the input variables are changed on the left, the content examples update accordingly (a minimal sketch of this interaction appears after this list).
  • Prototype 3b: Another view of prototype 3 (recommendation algorithm report Plug and Play feature) [PNG]
    Alt text / description of image: This prototype is similar to prototype 3 but shows another view, after the user hypothetically changes their location from New York, USA, to Buenos Aires, Argentina. Because of this change, the content examples on the right are adjusted to the user’s new location selection.
  • Prototype 4: Data movement on and off the platform for recommendation algorithms [PNG]
    Alt text / description of image: This is the last prototype presented to users. The page begins with the title “How Does Your Data Move From Our Platform to Our Partners,” followed by a paragraph describing that process. The next title is “How Your Data is Processed and Shared (or Not),” and underneath it the rest of the page shows a graphical representation of the different data types acquired from third parties (such as websites you visited and ads you clicked on) and data that the platform shares with others. The data types are arranged in four categories: publicly shared, shared with trusted partners, shared with specific users by request, and never shared. The user can move the different data types about them (like inferences of their political affiliation or their ad interaction history) from one category to another, thereby determining the extent to which their data can be shared with third parties (a minimal data-model sketch appears after this list).
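
To make the “Plug and Play” interaction in Prototype 3 concrete, here is a minimal TypeScript sketch of the underlying idea: user-editable input variables determine which example content appears, and editing an input re-renders the examples. This is our illustrative reading of the prototype, not code from the report; all names, functions, and example data are hypothetical.

```typescript
// A minimal sketch of the "Plug and Play" idea: user-editable inputs
// drive which example content is shown. All names and data are hypothetical.

interface UserInputs {
  location: string;
  gender: string;
  age: number;
}

interface ContentExample {
  kind: "ad" | "group post";
  text: string;
  // Stand-in for the platform's real ranking logic: would this example
  // plausibly be recommended for the given inputs?
  matches: (inputs: UserInputs) => boolean;
}

const examples: ContentExample[] = [
  {
    kind: "ad",
    text: "Exhibition opening at MoMA this weekend",
    matches: (i) => i.location === "New York, USA",
  },
  {
    kind: "ad",
    text: "Tango night in your neighborhood",
    matches: (i) => i.location === "Buenos Aires, Argentina",
  },
  {
    kind: "group post",
    text: "New thread in a group you are in",
    matches: () => true, // shown regardless of the edited inputs
  },
];

// Re-render the right-hand content panel from the current inputs.
function renderExamples(inputs: UserInputs): string[] {
  return examples
    .filter((e) => e.matches(inputs))
    .map((e) => `[${e.kind}] ${e.text}`);
}

// Prototype 3: the inputs as originally given.
console.log(renderExamples({ location: "New York, USA", gender: "woman", age: 34 }));
// Prototype 3b: the user edits their location, and the examples update.
console.log(renderExamples({ location: "Buenos Aires, Argentina", gender: "woman", age: 34 }));
```

Prototype 3b corresponds to the second call, where only the location input has changed and the example content shifts accordingly.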
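
Prototype 4’s sharing controls can likewise be read as a simple data model: each data type sits in exactly one sharing category, and moving it between categories changes what the platform may share. The following TypeScript sketch is again a hedged illustration with hypothetical names, not an implementation from the report.

```typescript
// A minimal data-model sketch of Prototype 4's sharing controls: each data
// type sits in exactly one sharing category, and moving it between
// categories changes what the platform may share. Names are hypothetical.

type SharingCategory =
  | "publicly shared"
  | "trusted partners"
  | "specific users by request"
  | "never shared";

type DataType =
  | "political affiliation inferences"
  | "ad interaction history"
  | "websites visited";

// Current placement of each data type, as the user would see it on screen.
const placements = new Map<DataType, SharingCategory>([
  ["political affiliation inferences", "never shared"],
  ["ad interaction history", "trusted partners"],
  ["websites visited", "trusted partners"],
]);

// Dragging a data type into another category updates its placement.
function moveDataType(dataType: DataType, target: SharingCategory): void {
  placements.set(dataType, target);
}

// The platform consults the placement before any sharing decision.
function mayShareWithPartners(dataType: DataType): boolean {
  const category = placements.get(dataType);
  return category === "publicly shared" || category === "trusted partners";
}

console.log(mayShareWithPartners("ad interaction history")); // true
moveDataType("ad interaction history", "never shared");
console.log(mayShareWithPartners("ad interaction history")); // false after the move
```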