
Report – Making Transparency Meaningful: A Framework for Policymakers

CDT report, entitled “Making Transparency Meaningful: A Framework for Policymakers.” Large black text underlined with three wide gradients of soft purples, green / blues, and reds – as if sunlight was reflecting through an open window.

The term transparency is everywhere in policy debates over the responsibilities of technology companies and how best to regulate them. And for good reason. Tech companies have promised greater transparency, and lawmakers in the United States, Europe, and elsewhere have proposed legislation that would enhance or require transparency, often at the urging of civil society. Transparency can enhance public understanding of how technology companies operate and make them more accountable, whether through public pressure or legal constraints. It is offered as part of the solution to difficult problems raised by technology: combating the spread of mis- and disinformation, reining in government surveillance, addressing discriminatory online advertising, and more.

But what exactly do we mean when we talk about transparency when it comes to technology companies like social networks, messaging services, and telecommunications firms? Transparency can take a variety of forms, and different stakeholders will find different types of transparency useful or important. And different forms of transparency give rise to varying technical, legal, and practical challenges.

This paper sets forth a conceptual framework for transparency about practices that affect users’ speech, access to information, and privacy from government surveillance. It maps and briefly describes current and past efforts at transparency in four distinct categories: 

  1. Transparency reports that provide aggregated data and qualitative information about moderation actions, disclosures, and other practices concerning user generated content and government surveillance; 
  2. User notifications about government demands for their data and moderation of their content; 
  3. Access to data held by intermediaries for independent researchers, public policy advocates, and journalists; and 
  4. Public-facing analysis, assessments, and audits of technology company practices with respect to user speech and privacy from government surveillance. 

The purpose of the framework is to delineate the different ways that policymakers, civil society, the private sector, and the public are discussing technology company transparency in order to provide greater clarity about the potential benefits and tradeoffs that come with each form of transparency. Discussions of each of these four categories of transparency mechanisms often happen in parallel, as if each is entirely separate from the others. However, efforts to improve transparency must appreciate how the different forms are linked and where they differ, as well as the challenges and tradeoffs in enhancing each form of transparency through voluntary and regulatory interventions. 

For example, various forms of transparency could be helpful in combating the spread of disinformation online. Policymakers and the public could gain a better general understanding of how disinformation spreads, and whom it affects, through company transparency reporting, but answering specific empirical questions about the patterns and consequences of disinformation will require giving independent researchers access to data to conduct their research. Both approaches will need to grapple with defining “disinformation.” Transparency reporting requirements may also need to resolve how companies should count their content moderation actions on disinformation, while mandated researcher access to data about online disinformation may need to resolve how to provide access to sensitive data while preserving user privacy. More targeted interventions, which help individual users navigate and debunk disinformation online, will require user-centric transparency and careful thinking about what kind of information is useful and actionable to users. And any third-party analysis or evaluation of company performance in combating disinformation would need to proceed from clear and objective criteria, likely informed by the findings from transparency reports, user notifications, and independent research.

The framework in this paper provides a structure for understanding the big picture of technology company transparency, and for approaching the critical decisions that must be made if we are to achieve meaningful transparency by, and about, technology companies.

Read the full report here.