
Facebook Settlement Shows FTC Will Not Limit How Companies Use Our Data, so Congress Must

The Federal Trade Commission (FTC) has announced a wide-ranging settlement with Facebook stemming from the Cambridge Analytica scandal that emerged 494 days ago. The new Order, which includes the long-expected $5 billion fine, primarily imposes accountability mechanisms on Facebook’s leadership and requires extensive internal procedures for creating, changing, and overseeing products. 

While the Order will certainly increase the amount of thinking about data privacy within Facebook’s product teams and management, it does not put in place any substantive limitations on Facebook’s collection, use, or sharing of personal information, whether among Facebook affiliates like Instagram and WhatsApp or with outside parties. There are two narrow exceptions. First, the Order prohibits Facebook from using phone numbers collected for multi-factor authentication for advertising. Second, in response to the confusing rollout of Facebook’s facial recognition controls, in which the company shifted from “tag suggestions” to a binary on/off switch for face tracking, the Order requires new “clear and conspicuous” notice of Facebook’s use of facial recognition and affirmative express user consent before the technology is used in materially new ways. Considering all that has happened over the last few years, this is a shockingly short list of clear directives from the FTC.

The majority of the Order instead focuses on new accountability mechanisms to ensure that Facebook follows whatever rules it comes up with internally, and on giving the FTC better “information flows” to monitor compliance.

Much of this is accomplished by imposing new oversight obligations at the board level. Facebook will establish a new Independent Privacy Committee and amend its corporate charter so that a two-thirds majority of voting shares is required to remove the committee’s members. According to the Commission majority, this supermajority requirement is designed to “significantly diminish[] Mr. Zuckerberg’s power—something no government agency, anywhere in the world, has thus far accomplished.” The Independent Privacy Committee will then be responsible for designating compliance officers, whose duties are largely modeled on the EU General Data Protection Regulation’s requirements for independent data protection officers. These officers will be responsible for preparing privacy review statements for (1) new products, (2) data sharing with affiliates like Instagram or Oculus, and (3) uses of sensitive information, including children’s data, financial data, and health and biometric information.

Certifications of compliance abound in the Order. For example, Facebook’s privacy officers will be required to ensure that independent, third-party apps and services certify compliance with the company’s platform terms, in an attempt to address the heart of the Cambridge Analytica problem. If Facebook finds that a third-party app or service has accessed or used the personal information of more than 500 Facebook users in violation of those terms, it must promptly report the violation to the FTC. Each year, these privacy officers, along with Mark Zuckerberg himself, must then certify compliance to the FTC in much the same way that corporate officials must certify compliance under the Sarbanes-Oxley Act of 2002.

CDT’s own recommendations include oversight mechanisms, but such mechanisms only make sense as part of a much more meaningful data protection regime. Unfortunately, the Order is limited in its ability to put in place firm rules and limitations on how Facebook ultimately uses and shares the information it collects. Falling back on “privacy review statements” cannot address this concern. In a dissenting statement, Commissioner Rohit Chopra identifies the root of the problem with risk assessments:

“The order does not prohibit the integration of the platforms; it requires only that Facebook designate the integration as a potential user risk. It does not require users to consent to the integration; it requires only that Facebook describe its consent procedures, “if any.” It does not limit what constitutes an acceptable level of risk to users; it requires only that the risks be documented. It does not require that Facebook eliminate or even minimize these risks; it requires only that it describe a process for mitigating them.”

The FTC should have gone further. But as FTC Chairman Simons acknowledged, the FTC is “not acting pursuant to comprehensive privacy legislation like the GDPR” but rather Section 5 of the FTC Act, which “was never intended to deal with privacy issues like the ones we address today.” He is not wrong.

We will see whether the FTC’s new Order can significantly change the culture around privacy and security at Facebook, but the real takeaway from this decision should be an understanding of the very real limitations the FTC faces when it comes to policing privacy. The Commission split 3-2, ultimately over whether it should have taken Facebook to court to “get more” or whether a protracted court battle would have yielded less than what the FTC ultimately got.

This is a real problem, and it goes to the crux of why CDT and others have been calling for a comprehensive federal privacy law. Today’s settlement asks the public to trust that Facebook will make the right decisions when it comes to collecting data about children, using our faces for products and services, and processing our health and financial information. But we should not have to trust Facebook. Instead, we and Congress ought to determine what the rules of the road will be moving forward. We join the FTC in saying that if one takes issue with this Order, the solution is for Congress to fix the problem.