Facebook Should Reform Its “Real Name” Policies

Today, CDT, along with several other advocacy organizations, including the ACLU of Northern California, EFF, and Access, sent a letter to Facebook urging the company to reconsider its “real name” policy, which has drawn criticism from a variety of groups. While Facebook’s policy is an attempt to correlate one’s online identity with an objective, non-constructed self, you don’t need a doctorate in sociology to understand that in our world, all identities are constructed. I myself use one name professionally that’s different from the name I use with my friends, and my family sometimes calls me by Hindi diminutives meaning “younger son” or (to my chagrin) “little fatty.” But there’s no way that one account name on Facebook could encompass all my different selves — or that choosing one would promote any objective sense of an “authentic self.”

Facebook’s stated policy is to require users to use the name they use in “real life”; when reports alert Facebook that a username isn’t in compliance with the policy, the account may be flagged or suspended. Promoting online safety, as Facebook intends, is a laudable goal — using a “real life” name can help prevent harassment and threats. But Facebook’s method of achieving it has serious flaws. We have long had concerns about the existence of this policy and the potential for abuse it creates, and we believe Facebook should modify or eliminate it — as other major platforms have done. Process improvements could go a long way toward minimizing the consequences for particular groups such as political activists; transgender and gender variant people; victims of domestic violence; Native Americans; and members of the clergy — all of whom have been disproportionately harmed by the policy.

Facebook states that its policy exists for two primary reasons: first, because it distinguishes the company from other services that allow anonymity or pseudonymity; and second, for safety. Yet we question whether the policy, as currently implemented, is the best way to achieve those ends. While the use of real names can promote safety in some instances, it can also imperil the safety of other users (like political dissidents or domestic violence survivors). Achieving that balance is difficult, but it’s not clear that the balance tips in favor of real names, or that this policy is the least restrictive way to protect users without potentially harming others. We believe that modifying real name policies is a more equitable, if not perfect, way forward — one that we hope Facebook adopts. Facebook’s policy is increasingly an outlier: other service providers have abandoned real name requirements, recognizing that they can be unclear and lead to “unnecessarily difficult experiences for some users,” as Google noted when it rescinded its own policy.

The coalition letter proposes specific changes that would create real improvements for Facebook users around the world. Reporting sprees — in which a user reports multiple accounts, potentially for malicious reasons — can silence users through account shutdowns. Targeted account shutdowns have been reported across the globe. And even when wrongful reports have been reversed, in some instances accounts have been reinstated under a person’s legal name, which may expose at-risk users to online and offline targeting by former abusers, repressive governments, and political opponents. Facebook could minimize these problems by committing to allow pseudonyms and non-legal names in appropriate cases; requiring substantiation for claims of account abuse to reduce the likelihood of abuse through reporting; and providing a more effective appeals process for users whose accounts are shut down. We hope that Facebook, like its peer platforms, decides to let users control their own digital identities and takes steps to protect them from malicious abuse reports and other forms of harassment.