What do discussions about Hamas, Kanye West, and Covid have in common? Well, they seem to trigger antisemites.
The conflict between Palestinian terrorist groups and Israel, the pandemic, vaccinations, lockdowns, and celebrity antisemitic statements all drove spikes in antisemitic messages on Twitter and other social media platforms.
The pandemic disrupted nearly everyone’s life, causing millions of deaths worldwide. It was also a time when people spent more time than usual browsing the Internet and engaging with social media posts. Many individuals fell into rabbit holes, where they discovered and contributed to wild conspiracy theories about nefarious actors. Antisemitism is never far from this kind of thinking.
The pandemic helped unleash an uninhibited wave of online antisemitism, with Jews being blamed not only for Covid, but also for immigration and culture wars, and, from those on the opposite end of the political spectrum, for racism, settler colonialism, and imperialism. Such accusations are often thinly veiled as “anti-Zionism,” whatever that may mean for those who oppose a Jewish state.
All of this can be observed in real time on social media. The challenge, however, lies in comprehensively monitoring the millions of messages across various platforms and languages. It is even more difficult to determine when stereotypical comments about Jews evolve into something more harmful, forcing Jews out of online or offline spaces, or posing physical threats. In other words, when does an antisemitic message, or even a seemingly positive stereotype such as casually praising Jews for their exceptional intelligence, become a threat?
Exact figures for the number of antisemitic messages sent over the Internet every day remain unknown, but it is likely in the hundreds of thousands, if not millions, across all platforms and languages.
On Twitter alone, focusing only on English conversations explicitly mentioning the word “Jews,” we estimated over 4,000 antisemitic tweets per day in 2020.
This number increased in 2021, both in absolute terms and in the proportion of antisemitic messages within conversations about Jews. When people talk about Jews on Twitter, roughly 6 to 20 percent of such conversations are antisemitic, depending on the time period. The highest peak was observed during the violent conflict between Israel and Hamas and Palestinian Islamic Jihad in May 2021. Fortunately, there are also many Twitter users who object to antisemitism and denounce it in their messages.
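Estimates like these come from classifying a random sample and extrapolating. As a rough illustration of the statistics involved, a sample proportion and its margin of error can be computed as follows (the counts below are hypothetical, not figures from our study):

```python
import math

def proportion_estimate(positives, total, z=1.96):
    """Sample proportion with a 95% normal-approximation margin of error."""
    p = positives / total
    moe = z * math.sqrt(p * (1 - p) / total)
    return p, moe

# Hypothetical sample: 130 antisemitic tweets out of 1,000 sampled
p, moe = proportion_estimate(130, 1000)
print(f"{p:.1%} +/- {moe:.1%}")  # prints "13.0% +/- 2.1%"
```

With a sample of this size, the estimate is precise to about two percentage points, which is why day-to-day fluctuations only become meaningful when they exceed the margin of error.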
Our estimates are based on representative samples of live tweets, which we classify using the International Holocaust Remembrance Alliance’s (IHRA) Working Definition of Antisemitism. This definition serves as a useful guide for identifying whether a message likely conveys antisemitic content.
The manual classification of samples is an important step toward semi-automated monitoring of online antisemitism.
Automated detection has made significant advancements in recent years, thanks to the development of deep learning techniques. The Network Contagion Research Institute and the Institute for Strategic Dialogue have utilized some of this technology to detect antisemitic messages during Elon Musk’s acquisition of Twitter. However, automatically detecting antisemitism remains challenging for several reasons.
First, access to data is limited due to platform restrictions and the computational power needed to process such large amounts of data. Second, the datasets used to train the models are relatively small and do not encompass all variations of antisemitic manifestations in the rapidly changing online environment. Third, the classification is not always accurate, especially if annotators do not work with live data to understand messages in their “natural” context, including threads, images, links, videos, etc. Messages that call out hate speech or report on stereotypes often lead to false positives. Identifying antisemitism is difficult for human annotators and even more so for machine learning programs at this stage. An illustrative test with ChatGPT demonstrates this. ChatGPT correctly identifies the antisemitic stereotypes referenced in the message “Fox News trashes Georges Soros while praising Joe Rogan using some antisemitic tropes – puppet master using his money to control the world. Then Pete Hegseth goes into a rant about the nonsense conspiracy theory Cultural Marxism. This is from Fox & Friends morning show,” yet it misclassifies the message itself as antisemitic, even though the tweet reports on antisemitism rather than endorsing it.
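Production systems typically fine-tune transformer models, but the underlying supervised-learning setup that these datasets feed can be sketched with a simple bag-of-words baseline. The training strings and labels below are invented placeholders, not drawn from any real annotation project:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data: real datasets contain thousands of
# expert-annotated messages; these strings are invented.
texts = [
    "they secretly control the banks and the media",
    "a shadowy elite pulls the strings behind world events",
    "the community center hosts a remembrance event this week",
    "new study examines the history of jewish life in berlin",
]
labels = [1, 1, 0, 0]  # 1 = antisemitic stereotype, 0 = benign

# Word and bigram features feeding a linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
preds = clf.predict(texts)
```

A baseline like this illustrates the third problem above: it sees only isolated word patterns, so a message that quotes a stereotype in order to criticize it looks, feature-wise, almost identical to a message that endorses it.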
We have been working on annotation projects to classify messages as antisemitic or not, and have been publishing the data to train machine learning programs. Similar projects are underway elsewhere, and the hope is to eventually build a dataset large and diverse enough to detect antisemitic messages reliably, enabling the monitoring of developments, especially radicalization, within certain conversations. We have observed that conversations can quickly radicalize in the absence of opposition to antisemitic ideas. This is particularly evident on fringe platforms or subgroups, such as certain Telegram channels and message boards. When there is an established norm within an online community that deems Jews evil, it is only a small step for someone to physically attack Jews.
Antisemitism attracts antisemites, who feel emboldened by it. This cycle needs to be monitored, understood in its dynamics, and eventually disrupted.