ProPublica reported Thursday that it was able to use Facebook’s advertising platform to target users who had expressed interest in topics such as “Jew hater” and “German Schutzstaffel,” also known as the Nazi SS. And when ProPublica’s reporters were in the process of typing “Jew hater,” Facebook’s ad-targeting tool went so far as to recommend related topics such as “how to burn Jews” and “History of ‘why Jews ruin the world.’ ”

To make sure the categories were real, ProPublica tried to purchase three ads, or “promoted posts,” targeting those users. Facebook’s targeting tool initially wouldn’t place the ads—not because of anything wrong with the categories, but simply because the number of Facebook users interested in them fell below its preprogrammed threshold. When ProPublica added a larger category alongside “Jew hater” and the others, however, Facebook’s ad tool reported that its audience selection was “great!” Within 15 minutes, the company’s ad system had approved all three ads.

Contacted by ProPublica about the anti-Semitic ad categories, Facebook removed them, explaining that they had been generated algorithmically. The company added that it would explore ways to prevent similarly offensive ad targeting categories from appearing in the future.

Yet when Slate tried something similar Thursday, an ad of ours targeting “Kill Muslimic Radicals,” “Ku-Klux-Klan,” and more than a dozen other plainly hateful groups was likewise approved. In our case, it took Facebook’s system just one minute to give the green light.

[Screenshot: Facebook’s ad targeting tool. Slate was able to place an ad that included the following targeting categories, among many others, with the help of Facebook’s algorithmic targeting tool. Screenshot/Facebook.com]

This isn’t the first time ProPublica, an investigative journalism nonprofit, has exposed shady targeting options on Facebook’s ad network. Last year, ProPublica found that Facebook allowed it to exclude certain “ethnic affinities” from a housing ad—a practice that appeared to violate federal anti-discrimination laws. Facebook responded by tweaking its system to prevent ethnic targeting in ads for credit, housing, or jobs. And last week, the Washington Post reported that Facebook had run ads from shadowy, Kremlin-linked Russian groups that were apparently intended to influence the 2016 U.S. presidential election.

The revelation that Facebook allows advertisers to target neo-Nazis and anti-Semites comes at a time when it and other tech companies are under growing scrutiny for their role in facilitating online hate and white supremacy. As our colleague April Glaser recently reported, the resulting crackdown by previously permissive tech companies has begun to give rise to a sort of right-wing shadow web that embraces controversial, offensive, and even hateful speech.

But in the meantime, it’s clear that major platforms such as Facebook have big messes of their own still to deal with. Facebook’s ad network, in particular, still seems to embody an “anything goes” approach to targeting, despite fixing a few high-profile problems such as the housing discrimination option.

About an hour after ProPublica published its story Thursday, Slate was able to place its own ad on Facebook using similarly offensive terms for audience targeting. Though the company had removed the specific terms mentioned in ProPublica’s story, it took us only a few minutes to find myriad other categories of the same ilk still available in the company’s ad targeting tool.

Following ProPublica’s methods, we built an ad to boost an existing, unrelated post. We used Facebook’s targeting tool to narrow our audience by demographics, including Education and Employer. We found and included 18 targeting categories with offensive names, each of which comprised a relatively small number of users, totaling fewer than 1,000 people altogether.

As with ProPublica’s ad, Facebook’s tool initially said our audience was too small, so we added users whom its algorithm had identified as being interested in Germany’s far-right party (the same one ProPublica used). That gave us a potential audience of 135,000, large enough to submit, which we did, using a $20 budget. Facebook approved our ad one minute later.

Below are some of the targeting groups Facebook allowed us to use in the ad. Many were auto-suggested by the tool itself—that is, when we typed “Kill Mus,” it asked if we wanted to use “Kill Muslim radicals” as a targeting category. The following categories were among those that appeared in its autocomplete suggestions under the option to target users by “field of study”:

  • How kill jewish
  • Killing Bitches
  • Killing Hajis
  • Pillage the women and rape the village
  • Threesome Rape

Under “school,” we found “Nazi Elementary School.” A search for “fourteen words,” a slogan used by white nationalists, prompted Facebook to suggest targeting users who had listed their “employer” as “14/88,” a neo-Nazi code. Other employers suggested by Facebook’s autocomplete tool in response to our searches:

  • Kill Muslimic Radicals
  • Killing Haji
  • Ku-Klux-Klan
  • Jew Killing Weekly Magazine
  • The school of fagget murder & assassination

Some of these categories had just one or two members; others had more. The group of users who had listed “Ku-Klux-Klan” as their employer numbered 123. This suggests that, while Facebook’s ad tool enforces a minimum total audience size, by default it allows an ad to target groups as small as a single individual, as long as other, larger groups are also included.
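To make that inferred behavior concrete, here is a minimal sketch, in Python, of the kind of check the ad tool’s behavior is consistent with. The function name, threshold value, and structure are our assumptions for illustration only; Facebook’s actual validation logic is not public.

    # Purely illustrative sketch of the audience-size check inferred above.
    # Facebook's real validation code is not public; the names and threshold
    # here are assumptions.
    MIN_TOTAL_AUDIENCE = 1000  # hypothetical minimum combined audience

    def audience_passes_check(category_sizes):
        # Approve targeting if the combined audience clears the minimum,
        # even when individual categories contain only one or two users.
        return sum(category_sizes) >= MIN_TOTAL_AUDIENCE

    print(audience_passes_check([1, 2, 123]))          # False: too small overall
    print(audience_passes_check([1, 2, 123, 135000]))  # True: padded with one large group

Note that only the total is checked: padding a handful of tiny, hateful categories with one large, unrelated group is enough to clear the bar, which matches what both ProPublica and Slate observed.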

Facebook did not immediately respond to Slate’s request for comment.

Future Tense is a partnership of Slate, New America, and Arizona State University.
