The Zuckerberg Hearings: Hits, Misses, and Unanswered Questions
Written by Natasha Duarte
The House and Senate grilled Facebook CEO Mark Zuckerberg for a total of ten hours this week, covering privacy, content policy, and election interference. The hearings didn’t reveal new information about Facebook’s practices, but they suggested that many members of Congress are ready to move on from the status quo of weak privacy protections and unfettered data collection by companies in the U.S. Below, we break down some of the top hits, misses, and unanswered questions from the hearings.
HITS
1. Rethinking U.S. privacy laws
Many members indicated a desire to pass new privacy laws or to strengthen the Federal Trade Commission’s (FTC) authority to enforce privacy. It was the strongest push for foundational privacy legislation we’ve seen in years. Members also got Zuckerberg on the record agreeing to at least discuss privacy legislation with their offices. For example, Sen. Maggie Hassan (D-NH) got Zuckerberg to commit to work with Congress on ways to protect privacy and well-being “even if it . . . results in some laws that will require [Facebook] to adjust [its] business model.”
2. Weaknesses in FTC consent decrees
Several members, including Reps. Diana DeGette (D-CO) and Debbie Dingell (D-MI), pointed out that FTC consent decrees – settlement orders companies enter into when the FTC finds that they have engaged in unfair or deceptive practices – lack teeth because the FTC can only assess fines if the company subsequently violates the order. CDT has released recommendations for strengthening privacy and security consent decrees.
MISSES
1. A lack of thoughtful questions with the depth and follow-through necessary to address Facebook’s privacy problems
Though CDT and others published questions that lawmakers should ask Zuckerberg, many of this week’s questions were too shallow or uninformed to get at the real privacy issues Facebook needs to grapple with. Zuckerberg fielded a lot of vague inquiries about user control, notice and consent, and selling user data to advertisers. These questions played directly into Zuckerberg’s talking points: Facebook does not sell data to advertisers, and it gives users controls for sharing content. For the most part, Zuckerberg was able to avoid addressing the legitimate privacy concerns raised by his company’s platform, including:
- The collection and aggregation of information about people who do not have Facebook accounts, known as shadow profiles. Rep. Ben Ray Luján (D-NM) pushed Zuckerberg on whether people without Facebook accounts can control how their information is collected and used, see what Facebook collects about them, or delete the data. Zuckerberg said he was not familiar with the term “shadow profile” and referenced only the controls that Facebook users have within their accounts.
- The downstream use of data by third parties that Facebook allows to access user profiles. Facebook says it’s taking more control over which apps can access user data through its API, but it’s unclear how Facebook intends to monitor or control what third parties do with user data once they have it (e.g., whether they disclose it to unauthorized parties, use it for unexpected purposes, or abide by data sharing agreement provisions such as deleting the data once it has been used). Nor does tightening API access address the provisions in Aleksandr Kogan’s contract with Facebook that allowed the researcher to sell or transfer user data.
- The way platform design encourages unexpected data collection from (or disclosure by) friends. On a social media platform, the relationship between a user and the company is not the only one that affects privacy. Our friends can also leak our personal information – intentionally or not – and the design of a platform can make this more or less likely. Some of Facebook’s features were designed to encourage users to share their friends’ personal information without needing their friends’ consent. These features include default sharing settings that users must opt out of, profile information that must be made public to use the service, prompts to upload your contacts when you sign up for Facebook’s services, and the sharing of call and text records with Facebook Messenger. This information feeds features like People You May Know, which suggests friends and can reveal sensitive connections between people – such as therapist-patient relationships – that were never intentionally disclosed to Facebook. Recently, PayPal settled an FTC complaint alleging that Venmo allowed users to share content in ways that violated their friends’ privacy settings.
2. The promise of AI tools for content moderation was overstated
Zuckerberg testified that Facebook is building AI tools to detect – and possibly automatically remove – problematic content, ranging from bots to terrorist propaganda and hate speech. Some members pressed Facebook to use more AI for content moderation and to find and take down problematic content more quickly. Given the scale of user-generated content on platforms large and small, it makes sense to use automated tools to assist with human review of problematic content. But relying on automated content moderation also carries risks, such as over-censorship and disparate treatment of minority speakers. CDT’s report on automated content moderation explains the limits of these technologies and why they should only supplement, rather than replace, human review. In light of these limitations, it is dangerous for policymakers to pressure platforms to rely on AI to remove content faster.
TO BE DETERMINED
1. Will Facebook continue to oppose (or attempt to weaken) state privacy legislation, such as the Illinois Biometric Information Privacy Act?
In the absence of strong federal laws, states have been active in putting forward privacy legislation, which Facebook has generally opposed. While Facebook announced this week that it would cease its opposition to a privacy-focused California ballot initiative, the company’s position on biometric privacy laws is less clear. Facebook-backed lawmakers continue to push an ill-conceived amendment that would completely undermine an Illinois law that protects biometric information and governs how companies can deploy facial recognition systems. Sen. Dick Durbin (D-IL) asked Zuckerberg, for the record, how attempting to gut the law is consistent with Facebook’s promises to do a better job of protecting privacy. CDT joined other advocates in a letter strongly opposing the amendment.
2. Will Facebook work with civil rights organizations to address the potential for discriminatory outcomes from the company’s advertising and content moderation practices?
Sen. Cory Booker (D-NJ) and Rep. G.K. Butterfield (D-NC) pushed Facebook to work with civil rights organizations to audit the company’s platform and practices – for example, to ensure that advertisers cannot unfairly target or exclude minority groups.
3. Will Facebook apply GDPR protections to all of its users?
Several members, including Reps. Gene Green (D-TX) and Jan Schakowsky (D-IL), asked whether Facebook will extend to U.S. users the same privacy protections it is applying to EU users under the General Data Protection Regulation (GDPR). Zuckerberg answered “yes” to this question at least once, but at times it was unclear whether he meant that Facebook would apply all of its GDPR privacy protections worldwide, or only certain user controls.