New CDT Report Provides Guide for Policymakers on Making Transparency Meaningful

In 2020, the Minneapolis police used a novel kind of warrant to investigate vandalism of an AutoZone store during the protests over the murder of George Floyd by a police officer. This “geofence” warrant required Google to turn over data on all users within a certain geographic area around the store at a particular time — which would have included not only the vandal, but also protesters, bystanders, and journalists.

It was only several months later that the public learned of the warrant, because Google notified a user that his account information was subject to it, and the user told reporters. And it was not until a year later — when Google first published a transparency report with data about geofence warrants — that the public learned the total number of geofence warrants Google receives from U.S. authorities and about a recent “explosion” in their use. New York lawmakers introduced a bill to forbid geofence warrants because of concerns they could be used to target protesters, and, in light of Google’s transparency report, some civil society organizations are calling for them to be banned, too.

Technology company transparency matters, as this example shows. Transparency about governmental and company practices that affect users’ speech, access to information, and privacy from government surveillance online helps us understand and check the ways in which tech companies and governments wield power and affect people’s human rights.

Policymakers are increasingly proposing transparency measures as part of their efforts to regulate tech companies, both in the United States and around the world. But what exactly do we mean by transparency when it comes to technology companies like social networks, messaging services, and telecommunications firms? A new report from CDT, Making Transparency Meaningful: A Framework for Policymakers, maps and describes four distinct categories of technology company transparency:

  1. Transparency reports that provide aggregated data and qualitative information about moderation actions, disclosures, and other practices concerning user-generated content and government surveillance; 
  2. User notifications about government demands for their data and moderation of their content; 
  3. Access to data held by intermediaries for independent researchers, public policy advocates, and journalists; and 
  4. Public-facing analysis, assessments, and audits of technology company practices with respect to user speech and privacy from government surveillance. 

Different forms of transparency are useful for different purposes or audiences, and they also give rise to varying technical, legal, and practical challenges. Making Transparency Meaningful is designed to help policymakers and advocates understand the potential benefits and tradeoffs that come with each form of transparency. This report addresses key questions raised by proposed legislation in the United States and Europe that seeks to mandate one or more of these types of transparency and thereby hold tech companies and governments more accountable.

Transparency Reports

Transparency reports are public reports of aggregate data and qualitative information about actions an online service provider takes that can affect users’ privacy and free expression. These reports may include information about government demands for user data or content restriction, information about the provider’s own content moderation decisions, and other topics. While transparency reports published by tech companies are most common, other entities, including governments, may also issue transparency reports on issues that impact users’ speech and privacy. 

Several bills introduced in Congress would require technology companies to publish transparency reports, particularly about their content moderation decisions. One example is the Platform Accountability and Consumer Transparency (PACT) Act, which would require larger online service providers to publish biannual transparency reports with particular data about their enforcement of their content policies. 

In Europe, Article 13 of the Digital Services Act (as currently proposed by the European Commission) would require providers of intermediary services to publish yearly transparency reports focused on information about government orders, notices submitted via the DSA’s notice and action mechanism, services’ content policy enforcement, and complaints from users.

Making Transparency Meaningful analyzes several key questions raised by mandatory transparency reporting, such as what kinds of information in transparency reports can actually help enhance companies’ and governments’ accountability to the public. For example, policymakers concerned about government surveillance of internet users may want to see transparency reports that include the number of requests for user data a company receives, broken down by the type of legal demand and the government entity that issues it, as well as information about the company’s response. They may also want to require government entities, such as courts or law enforcement, to issue reports about the demands for user data that those entities issue.

We also discuss practical considerations concerning transparency reporting, such as how data in transparency reporting should be categorized and counted and how to mitigate the impact of transparency reporting requirements on smaller and startup companies. Current legislative proposals grapple with these same questions: the PACT Act would exempt smaller providers, defined by consumer usage and revenue thresholds, from its transparency reporting requirement. Making Transparency Meaningful describes different methods of distinguishing small companies from large ones, as well as approaches that do not rely on such distinctions, such as adopting high-level transparency principles that all companies could meet, rather than detailed metrics.

User Notifications

Making Transparency Meaningful discusses three types of notifications from technology companies to users that most strongly impact — and can help protect — user privacy and speech: notifications about 1) government demands for user data; 2) legal demands for content removal or restriction; and 3) companies’ own content moderation decisions.

Many current legislative proposals would require companies to notify users about their content moderation decisions. For example, the Algorithmic Justice and Online Platform Transparency Act would require online platforms to disclose information to users about the algorithmic processes the platforms use to recommend or withhold content from users, as well as a description of their content moderation practices. (The bill also has a separate transparency reporting requirement.) 

Mandating user notifications about content moderation decisions comes with tradeoffs. More information is not always better; the frequency of notifications and the level of detail they must include will affect whether notifications help users understand why their content has been moderated. In addition, user notifications about content moderation may be counterproductive in some instances, such as notifications that inform spammers about how and why their content has been moderated in a manner that enables them to evade moderation in the future. Evaluation of bills proposing mandatory user notifications, like the Algorithmic Justice and Online Platform Transparency Act, must consider these tradeoffs.

Researcher Access to Data 

Independent researchers, public policy advocates, and journalists seek access to data from hosts of user-generated content in order to investigate scientific or other academic questions, publish news or analysis, and inform advocacy and policymaking. In the wake of recent controversies over platforms providing flawed or inadequate access, or prohibiting access by specific researchers, lawmakers have proposed requiring platforms to give researchers access to certain data.

In the United States, the Social Media Disclosure and Transparency of Advertisements Act of 2021 (Social Media DATA Act) and Platform Accountability and Transparency Act (PATA) provide examples of different approaches to researcher access to tech company data. The former would require consumer-facing platforms to give academic researchers access to digital advertising data; the latter would establish a process through which large hosts of user-generated content would be required to share data with university-affiliated researchers whose projects have been approved by the National Science Foundation. In Europe, Article 31 of the Digital Services Act as proposed by the European Commission would require very large online platforms to provide certain vetted researchers with access to data necessary to monitor and assess compliance with the DSA.

As we describe in Making Transparency Meaningful, mandating researcher access to data held by hosts of user-generated content requires resolving questions such as: Who should get access and to what types of data? How do we safeguard user privacy while granting access? What is the best mechanism for providing researchers access to data from companies?

Lawmakers are confronting all of these questions as they consider whether and how to require researcher access to data held by hosts of user-generated content. For example, the DSA’s proposed Article 31 would limit access to researchers affiliated with academic institutions, while the committee leading the European Parliament’s work on the DSA has proposed expanding access to also include researchers affiliated with vetted not-for-profit bodies, organizations, or associations. In another example, the Social Media DATA Act would provide access only to advertising data (which may raise fewer concerns about user privacy but also limits the type of research that can be done), while PATA casts a wider net and would allow access to a variety of data of potentially great value to researchers (while also potentially creating greater risks to user privacy). Understanding these tradeoffs is crucial, as these decisions will shape the kinds of research possible going forward.

Assessments and Audits

Analyses of a technology company’s business practices, whether in the form of forward-looking risk assessments or backward-looking audits, are another key mechanism for accountability. While transparency may not be the main purpose of risk assessments or audits, a published report of an assessment or audit can offer insight into how a technology company operates and its impacts on the speech and privacy rights of users and communities.

Article 28 of the European Commission’s proposed Digital Services Act would require certain very large online services to undergo formal yearly audits evaluating their compliance with various requirements in the Act; companies would also be required to publish the audit reports and a report on their implementation of any recommendations in them. In Making Transparency Meaningful, we explain several key considerations relevant to the DSA’s auditing requirement. For example, the DSA requires that auditors be independent and have proven expertise, objectivity, and professional ethics. We suggest potential criteria and processes for establishing such independence, expertise, and ethics. (The committee leading the European Parliament’s work on the DSA has proposed further criteria that auditors must meet.)

Policymakers also have to consider what information from an audit or assessment should be made public, balancing the need to protect privileged or trade secret information on the one hand with the need to reveal enough information so the public can understand and evaluate the conclusions of the audit or assessment on the other. Such considerations are directly relevant to the DSA’s requirement that yearly audit reports and audit implementation reports be published, and the minimum standards the DSA establishes for what each report must contain.

Each form of transparency discussed in this framework — transparency reports, user notifications, access to data held by private companies for independent researchers, and public-facing analyses, assessments, and audits of technology company practices — holds unique promise and poses unique challenges. Making Transparency Meaningful is a resource for policymakers and advocates striving to address those challenges. While there are no easy answers to the tradeoffs raised, asking the right questions will lead policymakers and advocates to the best path for truly making tech transparency meaningful.