Transcript

Oren Segal
Antisemitism & Extremism Online: The Consequences of Digital Hate

Sunday 5.12.2021


- Oren, a very warm welcome, and thanks so much for joining us this evening. We are very fortunate to have Oren Segal with us, who’ll be in conversation with Carly. As vice president of the ADL Centre on Extremism, Oren Segal and his team combat extremism, terrorism, and all forms of hate in the real world and online. Recognised as the foremost authority on extremism, the centre provides resources, expertise, and training, which enables law enforcement, public officials, and Internet and technology companies to identify and counter emerging threats. Oren joined ADL in 1998 after working for “The New York Times” and the Jewish Community Federation in San Francisco. In 2006, Oren was recognised by the FBI for his exceptional service in the public interest. He was named in the Forward’s list of 50 influential, intriguing, and inspiring American Jews in 2019. Oren, what an honour to be singled out and named as one of the top 50 influential people. I’m so happy to have you with us tonight. It’s a great honour, and I’m going to now hand over to you and to Carly. Thanks so much.

  • Thank you, Wendy. I apologise to our listeners in advance. I’ve got a cold, so I hope I don’t sound too snuffly, but Oren will jump in if so. So Oren, I wanted to start with a kind of bird’s-eye view. You’ve been in this space a long time. What do you see as the trends? And what’s the data telling us about where we are today in terms of online extremism?

  • Right, so first of all I just want to say thank you, Carly. I look forward to our conversation, and thank you, Wendy, for that warm introduction. So, you know, things are fine. Everything’s fine, and we can just, you know, end the Zoom call. No, I think that what we’re seeing, certainly in the past several years, is a rise in the use of social media, right? That’s not a surprise. Social media and technology platforms are relatively agnostic, right? They’re tools. The question is how are people using those tools? And in the past several years, we’ve also seen, you know, a rise in different types of extremist activity, including some of the most violent. I think the best way to think about this is how do people create a worldview, right? How do they understand the world around them? Increasingly, they get their information from their phones. I mean, I don’t know if you’re like me. I sleep with my phone. I don’t know if that’s too much information, but it is, like, attached, not just to my hip, but almost to my brain, and so the more that we rely on those platforms to understand the world around us, the more we might have access to disinformation and the type of actors that want to promote not just hatred, but encourage us, right, people, to act on these grievances.

And so you have mass casualty shootings, not just in the U.S., but abroad, and the amount of content and the platforms that are able to feature content geared toward hatred or misleading people. We’re just in a different type of communication environment, and I think that’s the foundation we need to understand. We’re not getting our news from three channels. We’re not getting our news just from newspapers. In fact, most people are not even getting their news from any one established source, so once we understand that that’s the environment, we can see where people try to prey upon it.

  • So the ADL does a lot of data-driven work. What does the data look like over the last few years? And what world events have caused a spike in online extremism?

  • Right, so, you know, I will say it’s hard to measure everything online. I mean, there’s not transparency everywhere, but one way to understand the impact of the online space is to see what’s happening in the real world and see how that is connected, right, to those online spaces. And so, you know, we have seen, for example, more extremist propaganda that we track in the United States. We’ve seen it increase exponentially over the past four or five years. Now, you might think to yourself, “Well, what does that have to do with the Internet or social media? Like, having a banner or a flyer that is anti-Semitic or hateful posted on a synagogue or some other religious institution or a municipal building or a campus, what does that on-the-ground activity have to do with what’s happening online?” And I will just say that the way that people share their ideologies, the place that people find the flyers, the banners, the stickers to download and then act on, is happening in an online space, so you can’t really separate it. The other element is that when we look at the most horrific of incidents that we’ve seen, when you think of Pittsburgh or Poway, attacks against Jewish institutions, or in El Paso, an attack, you know, at a Walmart, but against, essentially, an entire community, or in New Zealand against a Muslim community.

It doesn’t just end with the attacks that occur, right? You have individuals who are then trying to leverage those in online spaces by sharing videos, right, by glorifying that violence, knowing that other people will be able to react to it. And so I’d say the big difference, and it’s not quite the statistical question that you asked, and I think we can touch on that a little bit more in a minute, but just, again, to underscore the role and how people should think about the online environment: when Robert Bowers, the Pittsburgh shooter, before he went in and killed 11 Jewish people, he signalled back to his online community, on this platform called Gab, basically letting them know that he was going to do this. In Christchurch, New Zealand, we had somebody who was literally filming the attack and live streaming it so people could join him on that journey as it was happening. And so my point is that, you know, to be able to be part of a movement or a group, you don’t need to meet them in person anymore, right? The pandemic has certainly shown us that. All you have to do is be able to engage in that hatred in these online spaces.

  • So let’s look at one particular strand of extremism, and that’s right-wing extremism, which, you know, has become more and more infamous, particularly in the U.S., in the last five years or so, and even just this weekend, you know, a lot of our audience will have seen the images of the Patriot Front, you know, marching through D.C. So let’s start with who are the Patriot Front? Do they, you know, have a big online presence as well as a physical presence? And is the concern about the rise of the far right online a fair concern?

  • Yeah, you know, let’s start with what we saw this weekend. For those who may not have seen it, we observed about 100 or so individuals marching in Washington D.C. under the banner of the Patriot Front. This is a white supremacist organisation that has, you know, followers or adherents around the country, and they’re actually responsible for the majority of that propaganda I mentioned over the past three years, putting hateful flyers, sometimes under patriotic-sounding narratives, in different places around the country. So they showed up yesterday. They are, you know, most well known for espousing anti-Semitism and racism under the guise of preserving the ethnic and cultural origins of their European ancestors, even though they’re here in the U.S. So basically, those are, you know, borderline neo-Nazis, but essentially, white supremacists, and this current version was developed in the post-Charlottesville world, where they splintered off from another group that was involved in Charlottesville.

In fact, it was the group that the person who killed a young woman by running her over was involved with. So here’s a group with a history of violence who believe in the hateful ideologies that we know lead to violence, and about 100 of them decide they’re going to march in Washington D.C. Now, there was no violence at that moment, right? I believe everybody got home peacefully, but this is what this group does. We’ve seen this before in D.C., in Philadelphia, right? They want to show up, sometimes unannounced, and create fear and anxiety in the community. It is a sign that there is an underbelly that is ready to demonstrate that they’re willing to act at a moment’s notice. This was their way of saying, “We’re here. Don’t forget us, and we’re ready, basically, ready to rumble when you least expect it.” But I want to be clear, you know, we can’t take the activity that they do that is not violent, like a march, like putting a sticker or a flyer on an institution, we can’t take that lightly, ‘cause those are the types of groups through which I think people learn about those ideologies, because of their activity online, because they promote themselves. They create experiences online where people are able to engage with them, and that’s how they reach, recruit, and radicalise people.

  • So you mentioned Charlottesville. We’ve got a global audience, and I’m sure a lot of them will have seen the headlines about Charlottesville at the time, but I think it would be helpful to break down Charlottesville and what it meant, both, you know, to Jewish communities in America and for the kind of monitoring of online extremism.

  • So yeah, you know, I think sometimes even a conversation like this, the title of this discussion, is very focused on the online. I would actually just say there is no difference between online and offline. I mean, there is, I guess, a practical difference, but in terms of the way messaging is sent, the impact, the reason we care about online activity is because of what it does to on-the-ground activity, so I just don’t separate them so much. But Charlottesville was an example of where people were able to organise around the country on various online platforms, not just, you know, the Facebook and Twitter and YouTube that people think of, but other alternative platforms at the time, and bring themselves together in one city in this country. And so the online organising was significant in terms of spreading the message to make people want to show up there in the first place, and we identified people from more than 36 states who ended up showing up in Charlottesville in August of 2017. Now, it’s most notoriously known for bringing different white supremacist organisations, who often didn’t agree with each other, together in this one spot, and then for the violence that it ended with and, you know, the anti-Semitism that marked the occasion, right? One of the lasting impressions that people have of Charlottesville is the chants of “The Jews will not replace us,” but the question is where did that come from? How did people from around the country find themselves coalescing around hatred of Jews and end up in violence? And I think the online discussion is part of that, right? I want to break this down a little bit more practically, and then I’ll stop. I’ll be quick.

If you’re sitting in your parents’ basement or, you know, in the kitchen or what have you, or you’re an adult, like, we all want to be able to feel like we’re part of something bigger than ourselves, right? We want to have purpose in our lives, so some people do it through their work, right, their family, their hobbies, but extremism and hate give people purpose too. It’s sad to say, but it’s true, especially the types of extremism and hate that blame others for the realities that people find themselves in. And so what the online space enables you to do is feel like you’re part of a global movement. You may not have left your town, right? But yet, you are connected to people who share your ideas, who are like-minded. It’s really powerful. It’s powerful if you have an interest in bowling and you find, you know, people around the world who love bowling, but it has that same impact if the shared interest is hatred of Jews or interest in violence, and that’s where those online spaces have helped connect people, based on their hateful ideas, in ways that we just haven’t seen in human history, and the results have been Charlottesville, Christchurch, Pittsburgh, January 6th.

  • And for you or organisations like yours, how easy is it to penetrate this online world? And what kind of regulations or steps have social media companies taken to try and tackle this?

  • Yeah, so for those who may not know, and maybe this was important to say up top, as the vice president of the ADL Centre on Extremism, I oversee a team that spends most of their waking hours in these online spaces that we’re describing to try to identify threats, right? To try to track people who pose a security threat, to understand the way that they speak to each other, their language, their code words. And to do that, you have to be in multiple different places. Now, a lot of it’s public, but quite a bit of it is not, you know, easy to find, and I don’t recommend anybody on this call trying to, right? So we have people whose job it is to try to understand where the next campaign is, where the next threats are, on various online platforms, whether public or private, and so, you know, that’s how we do our job, and then we leverage that. If we see a threat, we will share that information with law enforcement. We share over 1,000 of what we call law enforcement assists, on people of interest or events that are happening, with law enforcement every day, because here’s the deal: I don’t need to get a court order or permission to open up an investigation, right?

We’re a journalistic organisation, and we can, you know, do the type of research into extremism to help protect the community that is a little bit different from law enforcement’s, so when we see a threat, we will share that with them. But we also, and this goes to the second part of your question, Carly, share information with the tech industry, because, again, nobody knows how extremists speak, the code words, the symbols, et cetera, better than we do, and we also have an expectation that those companies, you know, create platforms that are safe for their users and don’t allow themselves to be exploited by extremism and hate, and we feel like we can help them. The question is do they want to be helped? And that’s not always the case. So I will say some of these organisations or companies or institutions, you know, have leveraged our information to train their algorithms to identify hate and try to, you know, prevent it from being on their platforms, but others simply have not done enough, and so that’s one of the ways we’re engaging with these companies based on what we’re finding. It’s not always to say, “Here are some extremists that you need to know about. Please take them off.” I mean, that’s not useful. I don’t want to give them fish. I want to teach those companies how to fish, and frankly, we’re willing to talk to anybody if they’re willing to take that action.

  • But there are some companies who’ve tried, and, perhaps, you know, when you start to intervene in those spaces, you can have unintended consequences. So whether, for example, it’s Twitter’s new online rules in terms of, you know, how much you can name people online, or whether it’s Facebook’s banning of Holocaust denial a year ago, you know, did either of those two policies help? Or, actually, have they had the consequence of making it more difficult for yourselves and others?

  • So first of all, I would just say, like, my job and the job of my analysts and investigators should be hard. It should not be easy to find hate and extremism, certainly not as easy as it is, so if the ultimate result of our work is making our jobs more difficult, bring it on. Fantastic. That means we’re succeeding. To just give the listeners a little context on the two items that you mentioned, in October of last year, Facebook finally recognised Holocaust denial as violating its terms of service, right, as a form of hate speech, in a sense, meaning for all the years that we were complaining about Holocaust denial being on the platform, it didn’t violate their terms of service, so they didn’t need to do anything about it. So the good news is they finally decided, you know, that they would flag that type of content as problematic and as violating their terms. But we did a study, published just last week, that looked at some of that content that was online before they recognised that this was violating their terms and found additional content that was Holocaust denial-related, and essentially said, “You’re not doing well enough,” right? If my analysts and investigators, as fantastic as they are, can find it, a multi-billion-dollar company can spend more money making it harder for that content to be found, and by the way, they themselves say they don’t want it on their platform now, so what they need to do is commit resources to make sure that content doesn’t find a home there, right? If they’re serious about that.

Now, I just want to anticipate a couple of questions that might come up about, you know, the idea of freedom of speech, and shouldn’t we be allowed to, you know, believe in whatever we want? And the answer is yes. Especially in America, you can be kind of as hateful as you want to be, but let’s not forget those are private companies that can make decisions about what they will allow and not allow, and they don’t have to, you know, abide by the free speech laws that we have, ‘cause they’re private. So that was the case with Facebook, and I would say they still have a long way to go in terms of doing a better job of removing that specific content, and by the way, a whole bunch of other hate and extremist content as well. The second item that you mentioned is that Twitter recently put in new rules, and it’s not totally clear what they mean, which is one of the problems, that would basically say, “If you provide,” well, I don’t want to speak for them, because I don’t know exactly what the rule is. The concern about the rule is that it will result in the removal of photos or information about extremists, meaning in their effort to ostensibly try to stop people from being harassed or doxxed on their platform. In other words, it’s reasonable, and, in fact, we ought to applaud a company that will take down content that provides the home address or personal information of somebody, no matter whether we like that person or not. That is an absolute positive, but in doing that, does it then prevent the exposing of bad actors? And so it’s just not clear exactly what their rule was intended for, and we have gotten reports that people who were exposing bad actors, without, you know, putting their families’ addresses online, were having that content eliminated. So I understand, actually, yesterday there was an article that said Twitter is rethinking their approach to this latest set of rules.

  • And obviously, one of the challenges on social media is the anonymity and the nature of people being able to hide behind, you know, various technology tools and switch platforms. How much do you think the problem of anonymity and how people are allowed to, you know, be keyboard warriors from a distance is causing this? Or do you think this is just the nature of social media, and therefore, the various policies and rules have to adapt accordingly?

  • So anonymity is interesting. I do think that when somebody is not accountable for what they say and do, when they can hide behind a mask or not have their identity revealed, they may act a little more inappropriately, right? I mean, I think that’s human nature. We’ve seen this play out. But, you know, is removing anonymity going to solve the problems of the Internet and the way it’s exploited for hate and extremism? I think not. I also think we need to think about how anonymity can sometimes be a tool that enables people to speak out on really important issues, right? So you can imagine people in a country, or maybe to make it more specific, in a school. If there are individuals who feel like they’re being targeted because of, you know, their religion, their race, who they love, and they don’t know how to, you know, create change, then being able to anonymously let people know what’s happening in a community, in a school, in a country can be very important, right? ‘Cause otherwise, people would feel afraid to put that information out. So my point is it goes both ways, and I don’t think there’s any one solution to the hate and extremism that we’re seeing online. I don’t think getting rid of anonymity alone will solve the issue, but I do think accountability is important, right? And the question is how do you have a certain level of accountability in these online spaces? And I will just say this: Maybe we just start with the easy stuff. Don’t let somebody threaten to kill somebody else in those online spaces. Don’t allow the glorification of mass murderers, right? Companies should act on that content and messaging, and I think that’s probably a good start.

  • Just that small step. So we’ve obviously talked quite a lot about right-wing extremism, but that’s not the only extremism you find online, and, you know, during the last Gaza conflict, hard to measure, I know, but ADL, at the time, saw a 75% increase in anti-Semitism during that conflict. So is it all forms of extremism, you know, that we are seeing on the rise? And, you know, is there a kind of strange-bedfellows dynamic going on between some of those extremist groups?

  • So let me unpack the question a little bit. I will say that anti-Semitism is not the sole domain of any one extremist movement; it’s not the sole domain of any political movement or religious movement. Anti-Semitism is unique in the sense that it can be exploited and animated by pretty much everybody across the ideological spectrum, and often is, so it’s not unique to any one group or individual. When it comes to extremism, though, I mean, this is a little bit different, right? Like, we have data that shows that in the United States over the last 10 years, of extremist-related murders in this country, 75% have been carried out by right-wing extremists, so that’s a significant data point. However, we know that anti-Semitism, you know, particularly in response to the conflict between Israel and Gaza, demonstrated that there’s an atmosphere that can be created, especially online, that can also lead to violence that has nothing to do with white supremacy, right? So, you know, we did a study of some of the anti-Semitic tropes that were emerging in online spaces during the Hamas-Israel conflict and found, for example, that on Twitter, in the days just after the conflict started, there were 17,000 variations of the phrase “Hitler was right,” right? So this is a direct campaign that started right after the Gaza conflict began, and by the way, that’s only a small sampling, ‘cause this is only in English and only on Twitter, so you can imagine how big it was when you look at the whole ecosystem.

On 4chan, which is like a cesspool of extremism and hate, there was an over 340% increase in anti-Semitic commentary and posts related to the Israeli-Palestinian conflict, and then, ultimately, what did we see, right? This is my point from earlier: numbers about the prevalence of this online are interesting, but what we really care about is how does this impact what’s happening on the ground? And we did see an impact, right? A 115% increase in anti-Semitic incidents in the United States during the time of that conflict in comparison to the previous year. A 115% increase. Many of those included the types of narratives and themes that we were seeing play out online, and this is the problem. This is what made it particularly scary this time around, right? Anti-Semitism rising as a result of a conflict abroad is not new, but when we saw people getting beat up in Times Square in New York, not far from where I am, or people in Los Angeles getting attacked, or in Las Vegas, it’s the violent anti-Semitism that I think the online environments normalise, again, no matter where you are on the political or ideological spectrum, that we all have to fight.

  • And how has Covid affected, I guess, particularly, the online world, but also what’s then manifesting in the streets? You know, if you look at the anti-vaxxer movement or the conspiracy theories around, you know, Covid, where it came from, et cetera. How much has this played out into generally boosting extremism? And then, what has that meant once the extremism takes to the streets?

  • So, you know, shortly after the pandemic began, well, even before the lockdowns, when word of the coronavirus started spreading, not surprisingly, we immediately saw, in online spaces, efforts to put an anti-Semitic spin on it. This is classically what happens online. Frankly, this is classically what happens historically during pandemics, where Jews are blamed, as we saw during the Black Plague, but this was a modern version of it. So certainly, after the lockdowns began around the world, we started seeing online commentary from a range of different places, from fringe actors, sometimes from, you know, the occasional state actor, other times influencers, about how the virus was a tool for Jews to expand their global influence, right? That sounds like the classic hatred of Jews. That Jews were profiting, right, financially, somehow, from the virus, another classic anti-Semitic narrative that has, you know, led to real violence against Jews over thousands of years. It was just, you know, online in a modern way. Same story, new technology, if you will. You know, some of the conspiracy theories were talking about how, you know, the state of Israel, in particular, was trying to benefit, and then you had a smaller subsection of, I would say, again, more fringe extremists telling people, and this is in the beginning.

This is like March, April, May of 2020: go find a Jewish person in your community and cough on them. The virus was actually being weaponised, saying, like, this is how you kill people: you literally go cough on them. So there were, like, massive amounts of this content online, and by the way, that’s just against the Jewish community. We were tracking a whole bunch of anti-Asian and, in particular, anti-Chinese sentiment as well. You know, at a time when people are afraid, right, when fear and anxiety are at their height, for this type of anti-Semitism to come up is almost expected, because when there’s fear and anxiety, that’s the environment that enables anti-Semitism to flourish, and we started seeing how this was playing out, from discussions online into targeting the Jewish community, most notably, certainly in 2020, through Zoom bombing. I mean, look at the platform that we are all on now. I’m looking down here: 686 people are listening to you and me talk, and, you know, a lot of people are used to Zoom now. A lot of people are really sick of Zoom, right? Not Zoom the company, but, I mean, wow, a lot of meetings right back to back to back. It’s a Sunday, and you are all on Zoom. I actually have a question about that, but we can move on.

The point is, when people needed technology to connect with their family, with their neighbours, with their loved ones more than ever before, we saw how anti-Semites were able to exploit that new technology to harass people. So we saw funerals being Zoom bombed, meaning, you know, extremists or opportunists were getting into those online spaces and putting up hateful content or pornography or images of Hitler, et cetera, right, to disturb and disrupt a wide range of online meetings like this, ‘cause anti-Semites never miss an opportunity to exploit a crisis. So one of the, I guess, side effects of the pandemic was the normalisation of those concepts, that Jews are responsible for the coronavirus, which is a classic hatred, and then finding a modern way to get that hatred to impact those in the Jewish community, through Zoom bombing, through the other sorts of messaging that we saw. Anti-Semitism is always the same. It’s the way that it is spread that changes.

  • So talking about new technology, as you would’ve seen just before the weekend, Donald Trump announced, you know, that he’d raised a decent amount of money towards his Truth Social platform, and, you know, some of his supporters have been, you know, accused of having strong links to the alt-right. What do you see as the likely concerns around this new platform? And, you know, with each new platform, does it become harder for you guys to do your job, despite the fact that you think it should be hard? Does it become harder? Or is it just a case of learn a new tool and off we go?

  • Yeah, so if I understand the question, I mean, I understand Trump continues to find ways to communicate. I’m not sure of the extent of usage of this platform that he might be creating, but honestly, like, this is not about Trump and his platform, right? You don’t need a new platform to understand how problematic online spaces are if they’re not moderated, if they don’t have terms of service, right, if they don’t care for their users and communities. So, like, we already have a whole bunch of platforms that we’re concerned about, and here’s the deal. Like I always say, I learn about new technology from the extremists and the haters who are early adopters, and from my eight-year-old son. They are the ones who are going to let me know where we need to spend that time. They’re the early adopters, and, in fact, the investigators at the ADL Centre on Extremism who are tracking this hatred and these threats every single day are in the same spaces, the same platforms, that my kids and other people’s kids and grandkids are. I mean, that should tell you all you need to know about extremism today: it’s literally existing in the spaces where our kids are spending most of their time. So whether or not you’ve ever been on TikTok, there are, we know ‘cause we’re tracking this, extremists trying to exploit it, trying to get their hateful message, again, from across the ideological spectrum, to reach children at an early age who may not understand, who don’t have the critical thinking skills. I mean, that should be a concern.

Frankly, I’m more concerned about that than about Trump’s platform, you know, and the other thing is this is not a political thing, right? Like, we should all, whether we’re on the right or on the left, care that our kids are getting accurate information and are not being exploited for hate, right? I think the left and right agree on that. I think on combating hatred, there’s a lot more that people would agree on. It’s just we’re not always agreeing on how to get there. And the last point I would say about this, and I don’t know if this is human nature, I don’t think it’s a uniquely American phenomenon, but it seems like we’re pointing across the aisle and blaming others for being the real anti-Semites, right? The right says, “Look at the left.” The left says, “Look at the right.” What we ought to do is just point at ourselves and say, “Who do we have trust with,” right? “Who do we have credibility with?” And if they normalise anti-Semitism, then we can reach them in a way that we probably couldn’t reach the other side, ‘cause we’re not seeing eye to eye. So it’s really about us making sure that we’re brave enough to speak in those circles that we’re in, and I think that will help make a change.

  • So the Internet is obviously a global phenomenon, and, you know, doesn’t have borders in the same way that countries do. How much of what you’re seeing is a global phenomenon? And how much is U.S. based?

  • Well, it’s all global, right? I mean, on most of those platforms, people are engaging from around the world. I mean, that’s one of the great promises of so much of this technology. It’s also, you know, one of the ways it can be abused. I’ll tell you, there was this group that was involved in murders, very much an anti-Semitic organisation, threats and plots, and a lot of the organising it would do was in those online spaces, and it turns out that one of the folks who was administering this online space, in which Americans were getting radicalised and acting out, was a 13-year-old Estonian. A 13-year-old, right? Knowing how to manipulate and leverage this online space. That shows, you know, the power and the global nature. You know, we’ve also seen how certain ideas or tropes that are classically American hate, if you will, right, we’ve had our anti-government extremists and our Klan and our white supremacy, we’ve seen how that has shown up in different parts of the world, whether through symbols or clothing, and animated extremists abroad. We’ve seen how ideas from abroad have impacted extremists in the U.S. This has been facilitated and really exacerbated by online spaces. So a lot of people have recently talked about the great replacement theory. This was created, you know, by a French author and is now ingrained in American extremist culture, if you will. So, you know, like global finance, like, you know, so many other things that changed with globalisation, the online spaces have made it so, you know, everything can be both local and global at the same time, and that includes hate and extremism.

  • So you touched before on how much information you pass on to the FBI and the authorities. How have the authorities adapted to this new space? And what do you think the U.S. government could be doing more proactively? Oren, you’re on mute.

  • Oh, I was on mute. Yes. Thank you. So this is a little tricky because, I mean, we’ve been dealing with this for 30 years. I mean, law enforcement has been dealing with this for 30 years. The tools keep changing. They keep getting more advanced. You know, encrypted technology, social media, and the question we have to ask ourselves is how much do we want our government and law enforcement to be in private spaces to monitor the activities and content that we know might animate extremists but that is completely protected, right? So I know, speaking with law enforcement quite a bit, they’re concerned about violent activity, criminal activity, that that’s where they need to focus, and, frankly, as a citizenry, that’s what we want from them. We don’t want them to be able to, you know, research anybody just because. The reality, though, is that people are motivated to violence and criminal activity because of ideas, so this becomes a battle, in part, not just about enforcement of what’s happening online, not just about law enforcement being able to do the job that they need to in terms of getting ahead of threats.

All super-important, but this is a battle of hearts and minds, and this is why we need a holistic approach to dealing with these issues. You know, but sure, government can do a little bit more as well, right? We all need to do a little bit more, and so, you know, one of the things that ADL’s put out is what we call our REPAIR plan, which has a set of recommendations that we believe will help push hate and extremism more toward the fringes in those digital spaces. An example of what we’re calling for is that government should protect consumers by holding platforms accountable for adopting and enforcing policies to identify and combat hate and harassment across all the platforms, right? So enforcement at scale. It doesn’t work if it’s, like, one part of a platform, or a platform on its own. This is an ecosystem. People are migrating from one platform to another, and we need to view this a little more holistically. You know, the basic concept of putting people over profit, right?

This is what motivated the Stop Hate for Profit campaign, in which ADL, along with partners from the NAACP and Common Cause and others, basically, you know, focused on Facebook to say, “You need to do a better job by making sure that your algorithms are not recommending bad content to people who otherwise wouldn’t have seen it, that you need to do a better job of removing violent, harassing content and content that violates your terms of service, that we’re not the crazy ones.” It’s not unreasonable. At the end of the day, if I’m simplifying it, it’s not unreasonable for users, right, people who are spending their time and money on platforms, to expect those companies to do more about protecting them from seeing hate and from having it recommended to them. It’s not even just that they allow it on their platform, but that they recommend it for people to see. So I’m happy to, I’ll put, oh, well, I don’t think I have access to the chat, but I’m happy to provide afterwards our REPAIR plan, which has, you know, a total of six examples of ways that we think government can do more to help force companies to do better, ‘cause it’s been a little hands-off, and maybe it’s time to be a little more hands-on.

  • And social media companies have gotten a bad rap for a lot of reasons in the last few years, and what you’ve just touched on is one of them. The other element is this kind of fake-news phenomenon, which, you know, has come up time and time again, and companies are looking to take steps now to moderate, whether it’s, you know, you’re on Twitter and you see a kind of alert under something, or, you know, the kind of score as to whether or not something has passed a fact check. You know, do you see those as helpful tools in fighting extremism? Or are they all a little too, you know, shallow and on the surface?

  • I mean, I will say that facts are indeed important, and the way that people manipulate facts is also important in the fight against not only anti-Semitism, but all forms of bigotry and extremism, right? So we know that disinformation has had a significant role in motivating people to action. I mean, we could just take January 6th as an example. You know, whether it was the idea of a stolen election, or that, you know, antifa is coming to your neighbourhood, or just a whole host of outlandish narratives that were being promoted online, reaching people in ways that became a drumbeat of disinformation. We know what the results of that can be, and I always tell people that, you know, obviously, our work is to track extremists and bigots, et cetera, and on January 6th, you know, we saw Oath Keepers and Proud Boys and white supremacists and anti-Semites and, you know, QAnon conspiracy theorists show up there, but they were about 23% of those who were ultimately arrested for, you know, going into the Capitol. What that means is that 77%, about three out of four, the vast majority of the people who were involved in that insurrection, in that extremist act, I would say, were not card-carrying members of any extremist group or movement. These are people who were motivated to show up because they believed their country was being taken away from them. They believed in a stolen election. Some of them, not all, believed that the vaccines are going to put microchips in your body, that the coronavirus is fake.

We have seen how violence has been animated by lies, and frankly, as a Jewish community, right, for ADL, seeing how lies animate people to violence is not new. I mean, this is 2,000 years old. So it doesn’t always have to be anti-Semitism that motivates people to violence, and that’s what we saw, in part, on January 6th. So this is why it’s important to speak out against lies that undermine democratic institutions, period, because the more that our democratic institutions are undermined by lies, the more normal it is to hate Jews, to hate women, to hate Muslims, to hate anybody, because our norms get broken down. And then I’ll just say this is why the mission of ADL is not just to stop the defamation of the Jewish people, but to secure justice for all. We recognise that when other forms of hatred are normalised, Jews are then targeted, and to fight anti-Semitism, you have to fight all forms of hate. I would just say in this modern world, fighting disinformation, I think, is an effort to protect marginalised communities, and we need to realise that now.

  • And the statistics over the last few years, you’ve obviously touched on anti-Asian hate. The mainstream media, and those of us who focus on this, would suggest that there’s a general polarisation and a general increase in hostility to minorities across the board. Are you seeing the same levels of increase against all minorities as you are in the rise of anti-Semitism?

  • So, you know, measuring that, again, is difficult in the online space. The data doesn’t really exist. We, you know, we try. We certainly look at anti-Semitism online, and we work with others to try to measure other forms of hate. I mean, the way that you can see it is through FBI hate crime statistics, and anti-Jewish hate crimes continue to lead the way in terms of religious-based hate crimes in this country, right? There were over 7,000 hate crimes in 2020, the last time the FBI released its numbers, which it updated, but I will tell you that even those numbers are incomplete, right? There are 60-plus cities of 100,000 people or more that just don’t report hate crimes, and it’s not ‘cause they’re the safest places on the planet, right? It’s just we have a hate crime reporting problem, which we’re, you know, working on. But, you know, the number of hate crimes targeting the Jewish community, considering the population is, what, two-point-something percent? It’s always been much bigger than the population share, and it speaks to the staying power of anti-Semitism, but it’s not just the Jewish community. Yeah, I mean, to answer your question about the hate that we’re seeing online: we know that when we see more anti-Semitism, we’re also seeing more of other forms of hate.

  • So what concerns you in the future about the online spaces, and what do you think could be done now to try and mitigate some of those things?

  • I guess the concern is that innovation is going to continue. There are going to be new technologies. It’s going to become an even greater part of our lives, and if we don’t, you know, make sure that there are ways of dealing with hate and extremism now, then when it becomes even more a part of our lives, if you can imagine that, it’s going to be too late. So my concern is that we don’t do enough now. Technology always seems to advance faster than our morality, but we need to, like, take a step back and make sure that we’re doing as much as we can so that the lessons are learned for the future, and this is why we’ve been calling out, you know, these companies to say, “You need to invest more of your time and money into making sure that your platforms don’t normalise hate, you need to do a better job of recognising it, you need to enforce your terms of service, you need to have more transparency, because people don’t even know what you’re doing, and we can’t just take you at your word, because we’re sick of seeing people who end up killing others because they were able to find a community on your online space,” right?

Like, we need to say, “No, that’s not acceptable. That’s not enough. You need to do more,” and you’ll hear, “Well, this is hard.” Well, like I said, yeah, it is hard. We want it to be hard, but there are some basic things that those companies have not done and can do more of. So I’ll just say we’re never going to be totally satisfied, right? None of us will be. This is something that’s going to be with us for a long time, but if we can’t even just call Holocaust denial a form of anti-Semitism, if only in 2020 was that decision made by Facebook, that means we’re not moving fast enough to recognise the impact of hate and extremism in these online spaces. I just want to say one more thing. The thing I actually am most concerned about is that anti-Semitism and other forms of hate become so ubiquitous and so normal, impacting our public discussion and what we’re seeing on the ground, that people won’t even recognise anti-Semitism and hate anymore, right? It’s not just a question of combating it. I mean, this could get so bad that people don’t even recognise it anymore because it’s become so normal. So yeah, I think we have our hands full, but, you know, I just want to be clear: I think we’re going to win, right? Like, you can’t do this work without having some optimism.

  • That’s what I was going to say. What are you optimistic about? You know, who have you really seen step up in the last few years? And what gives you hope that we are making progress?

  • Yeah, I mean, listen, the conversations are being had. You know, people are still outraged when, you know, companies are not living up to their expectations or the expectations of their users, right? It’s not a completely lost cause. We have seen some examples of companies, you know, smaller ones, whether it’s dating apps or others, that have, you know, taken steps to make their specific platform a little bit safer, right? It’s hard to see some of what’s happening in our political world and feel good that, you know, extremism and hate are being pushed back on as much as we would like, but, you know, I really do believe this moment in time is not going to be remembered just for the hate and the extremism that we see every day. It will be remembered for what good people did to say no, to push back. So I am, I mean, listen, there are 600-and-something people on this call on a Sunday. Like, people want to find ways to push back against hate and extremism. I feel that every single day. That’s inspiring, right? People have not completely given up. We just need to give people the tools so that they can feel like they’re part of that. Maybe we could talk about what that is.

  • Yeah, so we’ve been very lucky that ADL’s been a big part of the Shine a Light Week and has, you know, done a lot, particularly both in the education space and the corporate space, to really engage with this. What would you encourage our listeners to do, you know, who are not, hopefully, haunting these spaces? What can they do in their worlds to challenge some of this or to look for ways to engage?

  • Yeah, I mean, I think number one is be an ally, right? I think when we see other communities that are being targeted or impacted, there’s real strength in showing up for that other community, and not because you want them to show up for you later on, when, inevitably, you might be targeted, but because, again, it’s really important to add humanity when things can be a little bit inhuman, by showing up for other communities, for other people, even in your own life, and saying, “I’m there for you. This is not acceptable.” I think using the bully pulpit, right, to the degree that people have a voice, or know people who have a voice, encouraging them to speak out against hate and extremism every time it shows up, whenever or wherever it arises, is critical. We should never be so tired that we say, “All right, I’m going to skip this one,” right? The stakes are too high. And if somebody doesn’t feel comfortable doing that themselves because of their circumstances, right, it’s finding others and letting them know. And I’m going to say one other thing, and this might sound small, but report what you see, right? We know that data drives policy. We know that the way we’re going to direct resources to those threats, in government, through civil society organisations, through others, is by understanding what the data gives us, and the only way to have real data is for people to report what they’re seeing. So if you see something online, don’t just report it to the company. You should do that, but let ADL know as well, right? If you’re seeing something happen in your community on the ground, report that. Report an anti-Semitic incident. The more data that we have, the more we can change policy and win hearts and minds. So speak up, be an ally, and report. Those three things, I think, are totally doable on a day-to-day basis.

  • Thank you, Oren. This was one of 22 events happening today as part of the Shine a Light Coalition, and Oren, you and your team have been big advocates and allies of this, and for me, this has been a real opportunity. You know, through the work that ADL does, particularly in this online space, and, you know, in our work at the Courage Foundation and our support for the Community Security Service and CSI, we see every day, you know, the meaningful difference this makes and the very important tools this provides, so I would encourage everyone to take a look at the initiatives that Oren’s mentioned, and also, you can find a lot of links on shinealighton.com, and thank you all very much. I’m going to hand back over to Wendy.

  • Thank you very much, Carly. Thank you, Oren, for an outstanding presentation. Really, so much to think about. I would very much like to expand on your suggestion that we talk about additional tools, maybe so we could share them with our participants. You know, I’m also thinking that at this moment in time, when the general narrative is being scrutinised so much by the left, this is also an opportunity for us, and we have taken it, thanks to all the organisations that have joined together to shine the light on anti-Semitism. It’s a real opportunity to continue the discussion and to focus on the language that we’re using. You know, you’re talking about belief systems, Oren, primary belief systems, and it’s so important that those people are challenged, especially the young ones, because they’re creating those belief systems when they’re very young. It’s very difficult to change them, as you know. So are you there? Have we got Oren there?

  • I’m still here. I hear you. Yes.

  • Yeah, yeah, so, you know, I think just think about education and how one can, you know, work with other organisations around education as well. You know, I think it’s a holistic approach.

  • I agree with you 100%, Wendy. I think, you know, it’s important to speak out. It’s important to provide intelligence for those who can, you know, ward off, hopefully, the most dangerous immediate threats, but at the end of the day, it’s about winning hearts and minds, and education and critical thinking skills are key. And we have to go back to the lessons of the Holocaust; one of the key elements of ADL’s education programme is lessons from the Holocaust. You don’t become a bystander. Terrible things happen when you allow them to happen and don’t speak out, and so we have an opportunity to teach kids at a young age to think critically about the type of messages they’re getting online, how to identify stereotypes and hate, and how not to be a bystander. I think those three things will help us.

  • Absolutely, well, once again, thank you very, very much for the outstanding presentation, extremely informative, and thank you, Carly, as always, and to all of you for joining us tonight, thank you very much. Thank you, Lauren, and on that note, I’ll say goodnight.