Published: Friday, September 15, 2017 @ 10:22 AM
By: Patrick McMahon, Rare.us
Facebook’s online advertising tools allow advertisers to segment and target anti-Semitic Facebook users who identify themselves with keywords and phrases like “how to burn Jews,” “Jew hater” and “History of ‘why Jews ruin the world,’” according to a new report by ProPublica.
Reporters located these audiences with Facebook’s own tools, then successfully placed ads targeting those users through the same methods advertisers would use to reach any other interest or affinity group on the world’s foremost social media platform.
Facebook approved three $10.00 ad buys from ProPublica reporters targeting people who listed “Jew haters,” “how to burn Jews,” and “history of ‘why Jews ruin the world’” on their profiles, even suggesting related terms like “Hitler did nothing wrong” to broaden the audience. The promoted post campaigns were approved in under 15 minutes, according to ProPublica.
Reporters took screenshots of the site as they placed the ads. After being contacted about the successful ad buys, Facebook officials said they would remove the audiences.
Shortly after the ProPublica report came out, Slate reported its own success placing similar ads targeting interest categories like “Kill Muslimic Radicals,” “Ku-Klux-Klan,” “Jew Killing Weekly Magazine” and “Nazi Party (Canada).” The ads were approved in less than one minute, Slate reported, noting that the specific terms ProPublica used were no longer available.
Facebook did not deny the ProPublica report, but representatives contacted Rare.us to insist that the hateful terms were generated by user-created profile descriptions, not by a Facebook algorithm. Facebook spokesperson Rob Leathern also issued this statement:
“We don’t allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes. However, there are times where content is surfaced on our platform that violates our standards. In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and (reviewing) processes to prevent other issues like this from happening in the future.”