
New CDT Research to Work with People to Co-Design Social Media Algorithmic Transparency Reports

Recommendation algorithms on social media can significantly impact users’ life experiences, both on- and offline. They can limit the information people are exposed to, negatively shape individuals’ body image, and influence offline behaviors like voting. These potential harms underscore the need for more transparency and access to information about how these algorithms work.

Recommendation algorithms are informed, in large part, by people’s own personal data, their behaviors on platforms, information from third parties, and even data purchased from data brokers. Yet little detail about recommendation algorithms and how they work is shared with the broader public: What do content recommendation algorithms take into consideration? How are they trained? What errors might they introduce? What personal data is being collected from users to “feed” these algorithms? Technology companies are not very transparent about the answers to these and other questions.

Following much advocacy from civil society, it has now become common practice for technology companies to produce semi-regular transparency reports for the general public about their responses to government demands for user data and content removal and, more recently, about their content moderation practices. To date, 88 tech companies have published such reports publicly. This is a big win, as more transparency can help the public and lawmakers hold platforms accountable for how they operate, as well as help their users understand how they work. Yet transparency reports almost completely lack information about how algorithms are used and implemented, despite their importance and the consistent interest expressed by the media, the public, and even governments.

Companies do make some information about this topic available, primarily within their privacy policies, in part due to requirements under the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and similar laws. Other valuable information can be found in the depths of companies’ Settings pages — for example, Facebook shares information about what data it uses to decide which ads to show an individual, and explains specific inferences it has made about that individual, in its Ad Settings page. This kind of information is hard to find, and it is usually not designed, in phrasing or accessibility, to be easy for people to understand and interact with. Rather, current algorithmic transparency practices may discourage people from trying to better understand how these algorithms work.

Social media platforms have offered a couple of reasons for not fully disclosing how their recommendation algorithms work. One is that such transparency may encourage scammers, spammers, and trolls to “play” the algorithm for higher reach. Another is the competitive advantage of keeping the mechanisms of an algorithm secret. But even if platforms did want to share more about their algorithms, it is not always easy — explaining how algorithms work is genuinely challenging, and the question of how best to provide algorithmic transparency and explainability has occupied academic researchers and civil society advocates for many years. Social media companies themselves have sometimes admitted that they do not fully understand, for example, the potential biases in their algorithms, and have reached out to the broader research and hacker communities for help in identifying algorithmic harms.

In a new research project at CDT, we aim to provide insight into the aspects of social media content recommendation algorithms that platforms can and should share with their users. We take a human-centered design research approach, in which we include a diverse set of research participants — everyday users of social media platforms — as co-designers of a future “algorithmic transparency report.” Some of our goals in this work include developing an understanding of:

  • What aspects of data and recommendation algorithms people care about most;
  • How they would want to learn more about them;
  • Which algorithmic practices should not only be reported, but also be accompanied by data points on an annual or bi-annual basis; and
  • Whether some aspects of algorithmic transparency reporting should also be personalized to individual users.

Design research is ideal in situations of unknown unknowns — it can reveal hidden and surprising aspects that have not yet been discussed. It also allows the broader community to participate in designing technology that is intended for them. We will use several methods (Co-Design, Card Sorting, and Experience Prototyping) to collaborate with everyday users towards several outcomes:

  • Foregrounding the conversation about the need for more algorithmic transparency on social media and its potential value to end-users;
  • Reporting an overview of the information that social media platforms already share publicly, and how that information is perceived by the people who use them. We anticipate that some of this information could be adjusted and included in future Algorithmic Transparency Reports; and
  • Providing concrete design recommendations regarding the aspects of social media content algorithms that people care about, along with examples of how these aspects could be shared.

We need a better understanding of people’s perspectives on recommendation algorithm transparency and explainability, and concrete ideas to guide progress in this area. Through our forthcoming research, CDT hopes to provide such insights and recommendations, and suggest new models for algorithmic transparency that help foster accountability and improve understanding by industry, researchers, civil society, and government stakeholders.