A Closer Look at the Legality of “Ethnic Affinity”

ProPublica recently published a story criticizing Facebook for providing “Ethnic Affinity” categories among the options available to advertisers targeting ads on the platform. Specifically, ProPublica reporters revealed that it is possible to exclude a particular group from seeing ads, as well as to target ads to them. In some contexts, including offers of housing or employment, it is illegal for the person making the offer to exclude people based on race.

The “Ethnic Affinity” targeting option has been in the news periodically since it was introduced in 2014. Facebook does not collect information about race directly from users, nor is the “Ethnic Affinity” category it assigns to users necessarily linked to their race. Instead, Facebook assesses a user’s interests and “likes” to infer whether she has an affinity for products or services Facebook associates with a particular ethnicity. For example, if you follow pages about anime and martial arts, you may be categorized as having an “Asian American Ethnic Affinity”. (Read CDT’s general response to “Ethnic Affinity” targeting here to learn more.)

After the program received negative attention in the media, Facebook quickly published an official response defending its choice to provide “multicultural advertising options”. The distinction between race and “Ethnic Affinity”, however, has not quelled concerns about Facebook providing this option to advertisers, who may use it to target ads in a discriminatory, and in some cases illegal, manner. The Fair Housing Act (FHA), for example, makes it illegal to publish an advertisement for housing that indicates a preference or discrimination based on race, color, religion, sex, national origin, familial status, or disability. Regardless of Facebook’s legal obligations, there are policy and technical solutions that can mitigate the risk of discriminatory ads being hosted and targeted through its platform, and Facebook should take action.

The Law: Section 230 and Third-Party Content

A number of laws are relevant to a discussion of whether something illegal is happening in this situation. While the FHA clearly establishes liability for the person who creates and posts the ad, a host of user-generated content like Facebook is likely protected by Section 230 of the Communications Decency Act. Section 230 shields internet intermediaries from liability for content that they did not author. In Chicago Lawyers Committee for Civil Rights Under Law v. craigslist, for example, the online classified ads site was protected from liability for ads posted by users that violated the FHA. This protection has proven essential to free expression in the digital age, enabling intermediaries to host huge amounts of user-uploaded content without pre-screening or filtering it.

A host can lose Section 230 immunity if it is responsible for “creating or developing” the illegal content, but this is a stringent standard. In Fair Housing Council of San Fernando Valley v. Roommate.com, one of the few cases where an intermediary did not have immunity under Section 230, the website had created an interface that required users to state preferences regarding the gender, sexual orientation, and family status of potential roommates. (The court later held that individuals can indicate these preferences without violating the FHA.)

Regarding Section 230 protection, the court found that “[b]y requiring subscribers to provide the information as a condition of accessing its service, and by providing a limited set of pre-populated answers, Roommate becomes much more than a passive transmitter of information provided by others; it becomes the developer, at least in part, of that information.” Short of a website forcing a choice or requiring a user to provide illegal content, however, Section 230 offers broad protection to sites that host third-party content, even if the availability of the platform means that some people will use it to break the law.  As ProPublica’s test ad (for an information session on illegally high rent) demonstrated, users can select Facebook’s “Ethnic Affinity” and “Interests: Buying a House” categories as a way to target a variety of content related to home-buying, not just offers for rental or sale.

What Facebook Should Do

Though Facebook may not be legally liable, the categories it presents to advertisers clearly have problematic uses, and there are steps Facebook can take to limit its advertisers’ ability to break the law. To do this, Facebook can make clear in its Advertising Policies what kinds of targeting are inappropriate, and set consequences for advertisers who target certain kinds of ads in a discriminatory way. Other actions Facebook could take to protect its users from illegal advertisements include:

Ask advertisers to identify advertisements for housing, jobs, or credit products.

Advertisements for housing, jobs, and credit products expose people to economic opportunity as well as to economic pitfalls. These ads are subject to legal requirements because of their potential negative impact on individuals. Facebook should ask advertisers placing ads that include offers of housing, jobs, or financial instruments to identify them as such, and should provide an easy technical mechanism for doing so. This will allow Facebook to add questions and steps to its review process and catch potentially discriminatory targeting of a campaign.

Increased review will be time-consuming and may initially seem inefficient. However, asking advertisers to self-identify relevant ads will also allow Facebook to build a high-quality dataset of potentially problematic ads that can serve as training data for an automated detection system in the future. Additionally, building infrastructure for advertisers to flag content that may raise legal questions, and then using those flags as training data, is an approach that can be applied broadly and at scale.
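To make this concrete, here is a minimal sketch, in Python, of how a self-identification step could route ads into extra review and accumulate training examples. The category names, data structures, and review logic are illustrative assumptions for this sketch, not a description of Facebook’s actual ad systems.

    from dataclasses import dataclass

    # Hypothetical categories an advertiser would self-identify at submission time.
    REGULATED_CATEGORIES = {"housing", "employment", "credit"}

    # Hypothetical targeting options that deserve extra scrutiny when combined
    # with a regulated ad category.
    SENSITIVE_TARGETING = {"ethnic_affinity"}

    @dataclass
    class AdSubmission:
        ad_id: str
        declared_categories: set   # self-identified by the advertiser
        targeting_options: set     # targeting criteria selected for the campaign
        needs_manual_review: bool = False

    review_queue = []       # ads held for human review before the campaign runs
    training_examples = []  # de-identified records for a future automated detector

    def screen_submission(ad: AdSubmission) -> AdSubmission:
        """Hold self-identified housing/jobs/credit ads that use sensitive
        targeting for manual review, and record a de-identified training example."""
        regulated = ad.declared_categories & REGULATED_CATEGORIES
        sensitive = ad.targeting_options & SENSITIVE_TARGETING
        if regulated and sensitive:
            ad.needs_manual_review = True
            review_queue.append(ad)
            # Keep only what a detection model would need, not advertiser identity.
            training_examples.append({
                "categories": sorted(regulated),
                "targeting": sorted(sensitive),
            })
        return ad

    # Example: a housing ad that also uses "Ethnic Affinity" targeting is held.
    ad = screen_submission(AdSubmission(
        ad_id="example",
        declared_categories={"housing"},
        targeting_options={"ethnic_affinity", "interest_buying_a_house"},
    ))
    assert ad.needs_manual_review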

Alert advertisers to their legal obligations.

Advertisers who violate the FHA or other civil rights laws can face far larger consequences than removal of their ads from Facebook’s platform. Facebook’s Advertising Policies already state that advertisers must comply with applicable laws, but advertisers may not know exactly what those legal obligations are. As we explained above, the Section 230 protections afforded to Facebook do not extend to the advertisers themselves, who can be held accountable for discrimination that results from targeted advertising. In particular, advertisers who self-identify their ads as offering access to housing, jobs, or credit should be warned, through a just-in-time notice, of the potential consequences of using the “Ethnic Affinity” categories (or other identity-related categories offered by Facebook).
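For illustration, such a just-in-time notice might be triggered along the following lines; the check and the wording are assumptions made for this sketch, not Facebook’s actual review flow.

    from typing import Optional

    def just_in_time_notice(declared_categories: set, targeting_options: set) -> Optional[str]:
        """Return a warning to show the advertiser before the campaign is saved,
        if a self-identified housing/jobs/credit ad uses identity-related
        targeting; otherwise return None."""
        if (declared_categories & {"housing", "employment", "credit"}
                and "ethnic_affinity" in targeting_options):
            return (
                "This ad offers housing, employment, or credit. Targeting or "
                "excluding people by \"Ethnic Affinity\" in such ads may violate "
                "the Fair Housing Act or other civil rights laws, for which the "
                "advertiser is legally responsible."
            )
        return None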

Make it easy for users to flag advertisements that seem discriminatory.

Users should be able to report ads that appear to be discriminatory. Facebook should add a mechanism for reporting problematic ads, and it should keep de-identified information about ads that are confirmed to have discriminatory impacts. Facebook can then analyze this dataset for trends, common characteristics, and other signals that could form the basis of an automated flagging system, sparing users from exposure in the first place.
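A minimal sketch of what this reporting pipeline could look like follows, assuming a hypothetical reporting function and an illustrative de-identified record format; none of these names reflect Facebook’s real infrastructure.

    from collections import Counter

    # De-identified records of ads confirmed to have discriminatory impacts.
    confirmed_discriminatory_ads = []

    def record_confirmed_report(ad_category: str, targeting_criteria: set) -> None:
        """Store a de-identified record of a user-reported ad that review has
        confirmed as discriminatory; no user or advertiser identifiers are kept."""
        confirmed_discriminatory_ads.append({
            "category": ad_category,                  # e.g. "housing"
            "targeting": sorted(targeting_criteria),  # e.g. ["ethnic_affinity", "zip_code"]
        })

    def common_targeting_patterns(top_n: int = 5):
        """Surface the targeting criteria that recur across confirmed reports;
        these could seed rules or features for an automated flagging system."""
        counts = Counter(
            criterion
            for record in confirmed_discriminatory_ads
            for criterion in record["targeting"]
        )
        return counts.most_common(top_n)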

Alert users when their “Ethnic Affinity” categorization is a reason they are seeing a piece of content.

Users may be best positioned to detect discrimination or bias in advertising targeted to them, and whether an ad was targeted on the basis of “Ethnic Affinity” is important to determining whether the content is appropriate or welcome. Facebook already provides some transparency into its advertising through a “why am I seeing this?” option in a menu accessible on each post. Whenever “Ethnic Affinity” targeting was used, that explanation should disclose it. Facebook should also consider proactively alerting users to the use of “Ethnic Affinity”, much like the recent proactive notices it provides about the privacy settings of posts and other content. This would let users who do not want to opt out of this targeting entirely know when their “Ethnic Affinity” is being invoked.
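As a rough illustration, such an explanation might be assembled as follows; the labels and the function are assumptions made for this sketch, not Facebook’s actual “why am I seeing this?” implementation.

    # Human-readable labels for targeting criteria (hypothetical values).
    CRITERION_LABELS = {
        "ethnic_affinity": 'your inferred "Ethnic Affinity"',
        "interest_buying_a_house": "your interest in buying a house",
        "age_range": "your age range",
    }

    def explain_ad(targeting_criteria: list) -> str:
        """Build a "why am I seeing this?" explanation, always disclosing
        "Ethnic Affinity" targeting explicitly when it was used."""
        reasons = [CRITERION_LABELS.get(c, c) for c in targeting_criteria]
        if "ethnic_affinity" in targeting_criteria:
            # Surface the affinity-based reason first so it is never buried.
            reasons.insert(0, reasons.pop(targeting_criteria.index("ethnic_affinity")))
        return "You are seeing this ad because the advertiser targeted " + ", ".join(reasons) + "."

    print(explain_ad(["interest_buying_a_house", "ethnic_affinity"]))
    # You are seeing this ad because the advertiser targeted your inferred
    # "Ethnic Affinity", your interest in buying a house.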

Allow users to participate in their categorization.

“Ethnic Affinities” are inferred by Facebook and assigned to users without their consent or, probably, their knowledge. Facebook currently lets users indicate, through the ad preferences tool, that they do not want to receive ads based on their “Ethnic Affinity”. While this is helpful, it is only available after a user has already been categorized for advertising purposes. Because a person’s “Ethnic Affinity” is inferred, users are not actively choosing to participate in this characterization, nor can they easily influence it. This is deeply problematic, and it can leave users feeling observed, uncomfortable, and unfairly treated.

Last spring, CDT conducted research with UC Berkeley indicating that users do feel slightly differently about personalization based on information they provided than about personalization based on inferred information. Allowing users to provide or correct their “Ethnic Affinity” will improve some people’s perception of how fair the practice is, but it is not enough to fully address the problems that can arise from personalization based on race. Facebook should also allow users to opt out of this categorization process entirely, rather than only allowing them to remove a preference once it has been assigned. For many users, the idea that Facebook is inferring this kind of insight at all is problematic, regardless of what ads they see.
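The difference between removing an assigned affinity and opting out of the inference itself could look something like the sketch below; the preference model and function names are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AffinityPreference:
        opted_out_of_inference: bool = False   # user declined categorization entirely
        user_provided: Optional[str] = None    # affinity the user supplied or corrected
        inferred: Optional[str] = None         # affinity assigned by the platform

    def effective_affinity(pref: AffinityPreference) -> Optional[str]:
        """Respect the opt-out before any inference is used, and prefer a
        user-provided value over an inferred one."""
        if pref.opted_out_of_inference:
            return None                 # never categorize, rather than categorize-then-remove
        if pref.user_provided is not None:
            return pref.user_provided   # the user participated in the categorization
        return pref.inferred            # fall back to the platform's inference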

Addressing Discrimination: Not Just the Right Thing To Do — It’s Good for Business

Online personalization is a contentious issue that merits Facebook’s attention. There are good reasons people feel uncomfortable with online personalization on the basis of race, or of proxies for it. Discrimination in advertising is tied to a history of segregation in the United States, and technology can amplify or mask discrimination online. At the same time, there are many nuanced issues, such as proxies and positive use cases for race-related targeting, that the conversation about legality does not address (as some have described in more detail).

Facebook’s business model relies on maintaining harmonious relationships with its users and its advertisers. Creating accountability and transparency around targeted advertising is good for business, because advertisers will have more information about what users want to see, but it is also a matter of principle. Letting users participate directly in a dialogue, and giving them tools to flag discriminatory content, will improve Facebook’s relationship with users and generate insight that Facebook can pass along to advertisers, improving the quality of the service it offers those customers.