As YouTube marked its 20th anniversary on Feb. 14, a new report has shed light on the social-media platform’s failures to enforce “Advertiser-Friendly Content” policies that have effectively monetized antisemitic content on the platform.

Research undertaken by CyberWell, a tech nonprofit focused on monitoring and combating online antisemitism, including Holocaust denial and distortion, found major gaps in the enforcement of YouTube’s policies, revealing that 24% of verified anti-Jewish videos in English were monetized with ads.

The research examined hundreds of AI-flagged videos containing blatant Jew-hatred, as well as antisemitic tropes and conspiracy theories, posted in English and Arabic. Conducted over the second half of 2024 and delivered to YouTube alongside enforcement recommendations, the study focused on videos posted between October 2023 and October 2024.

Tal-Or Cohen Montemayor. Credit: Courtesy.

“As YouTube celebrates its milestone anniversary, it must take greater responsibility for the content that appears on its site and take immediate action to curb the spread of antisemitism and bar the monetization of hate and harmful content,” said Tal-Or Cohen Montemayor, founder and executive director of CyberWell.

CyberWell uses AI technology to monitor posts consistent with the International Holocaust Remembrance Alliance’s (IHRA) working definition of antisemitism. Each post is individually vetted by the nonprofit’s analysts and submitted to social-media platform moderation teams (sometimes referred to as “Trust and Safety”), alongside the relevant community guidelines and hate-speech policies that the individual post violates.

CyberWell’s findings show that 24% of the English-language videos analyzed and a concerning 36% of the Arabic-language videos were monetized with ads. This monetization represents a direct financial incentive for YouTube’s parent company, Google, and for YouTube creators to facilitate the production and amplification of hate content.

While most of the content examined violated YouTube’s guidelines, including its own hate-speech policy, only a fraction (less than 11%) was removed after being reported by users. (According to YouTube’s “Advertiser-Friendly Content” policy, content that disparages, humiliates or incites hatred against individuals or groups based on their race, religion or ethnicity is not eligible for monetization.)

This falls well below YouTube’s average removal rate for online antisemitism, which CyberWell documented as 32.1% in its 2024 annual report. The study further illustrated significant gaps in YouTube’s enforcement of its own policies in detecting and removing antisemitic videos that violate the company’s community guidelines.

Content creators often circumvented YouTube’s automated detection systems, which rely heavily on voice-recognition technology, by overlaying text or using visual aids to avoid detection. Other users merely posted disclaimers claiming that their content was not affiliated with hate groups before posting anti-Jewish content.

Cohen Montemayor said, “YouTube’s algorithms appear ill-equipped to recognize the full range of antisemitic rhetoric, particularly when expressed through images or subtle language. The lack of precision in identifying the full range of Jew-hatred has led to worrying gaps in the enforcement of its policies against antisemitic content and to a failure to protect brands on their platform.”

In response to the findings, CyberWell issued several recommendations. Among them are calls to rigorously enforce the platform’s hate-speech policy; improve detection of religious antisemitism; and implement stricter safeguards on monetized content. The report also suggests that YouTube consider adopting new methods of detecting antisemitic rhetoric in video thumbnails, images and written disclaimers, which have been employed to bypass the platform’s automated systems.
