It’s not social media. It’s not extremist ideas. What enables mass killers is much more mundane. And insidious.
Don’t blame social media. Instead, criminologists say the key to understanding mass killers is how they are socialised, how they reinforce their convictions – and how they are enabled to act.
The racist mass murder in a Buffalo, New York, grocery store on 14 May has the world wondering who will be next. Hate crimes seem out of control. And one rampant online conspiracy theory in particular – Great Replacement – appears to be behind it all.
Naturally, we want to know why.
The accused Buffalo killer’s 180-page attempt to excuse his act claims he came to his beliefs “mostly from the Internet”. He describes being inspired by the perpetrators of previous highly publicised mass killings in Norway and New Zealand.
And that, once again, has the world focussing on the self-learning, profit-driven algorithms controlling what we see on social media. Is there something about their design that’s driving extremism? Are they fuelling hate for profit? Or are we missing the point?
“I would say that hate on the internet is absolutely a symptom and not a cause,” says criminologist Dr R. V. Gundur, who’s based at Flinders University in South Australia.
It’s not just that algorithms are spreading radical ideas, or that social media encourages a “feedback loop” of like-minded individuals, or that online communities facilitate finding the necessary resources and skills.
It’s that social media is a mirror amplifying what’s already there.
“If we look at any mass shooting, you have the radicalisation process – that’s one thing,” says Gundur. “But then there’s something that triggers action – the action that somebody is willing to take in order to put their beliefs in motion”.
That, he says, seems to be a combination of culture and environment.
On 14 May, the Buffalo shooter allegedly singled out and killed 10 people, citing a supposed “genocide of the European people”. He live-streamed the act, attributing his radicalisation to the internet and seeking to promote his acts online.
It’s an increasingly familiar story.
The Australian man responsible for the 2019 New Zealand mass killings told investigators that online videos were a major part of his white supremacist life. That’s why he live-streamed his attack – it gave his message global reach.
Lowy Institute research fellow Lydia Khalil says the world is still coming to grips with the power of mass social-media platforms.
“People need to be able to understand how a choice to view, for example, an anti-vax video online can lead to being recommended an anti-Covid-lockdown account, and then possibly a video of Proud Boys fighting protesters, to then populating a person’s video feed with white supremacist content,” she says.
But Gundur says social media’s role remains communication. “People who do horrible things oftentimes want other people to know that they’re doing those things. And they’ll use whatever means available to do so.”
Mostly, people find like minds, says Daniel Byman, a professor in the School of Foreign Service at Georgetown University in Washington DC, US.
“They use jokes and sarcasm to lampoon the social-justice community, feminists and other supposed villains – and (these hate groups) have flourished,” he says. “They’re finding their group. They’re entrenching the ideologies that define their sense of identity. They’re convincing themselves of the urgency and severity of the perceived threats.”
Gundur says this is how society leads social media into a deadly feedback loop.
“The internet predicts things that you’re going to like, because that’s how people or organisations are able to leverage it in order to generate income,” he says. “But these market forces have changed the very nature of how we engage with the internet to feed us what we are likely to click on.”
Extremism is getting easier. Multinational networks are well established, and online platforms are at their core – sharing information, coordinating actions and inciting hatred. More and more people are getting radicalised.
Governments the world over, including Australia’s parliamentary Joint Committee on Intelligence and Security, have been seeking to understand the reasons. Are social media platforms “echo chambers of hate”, as the Australian Security Intelligence Organisation (ASIO) accuses? Do algorithmic mechanisms really profit from hate?
“Such questions can’t be fully answered, because there is a lack of transparency around how recommendation algorithms are designed,” says Khalil. “Algorithmic transparency is a critical step to tackle violent extremism comprehensively, along with other social harms and threats to democracy.”
Independent testing of Google, Facebook and Twitter repeatedly shows how “average” users are regularly led to increasingly extreme content. An internal Facebook report from 2016 concluded that “64 per cent of all extremist group joins are due to our recommendation tools”.
Social media corporations say they’re tweaking their algorithms and hiring moderators to prevent this from continuing. But clickbait farms are still exploiting the likes of Google’s AdSense and Facebook’s Instant Articles profit-driven marketing algorithms. And the self-learning nature of these algorithms means the companies themselves may be unaware of their “thinking”.
Syracuse University media analyst Dr Whitney Phillips has told NiemanLab that social media is a catalyst. “I think of (the algorithms) like salt in some ways – that they intensify the flavour of food. So if you think about that in terms of sort of beliefs, and what people are bringing to their social platforms, algorithms can absolutely enhance those beliefs, both in negative ways and positive ways, by showing people more of what it is they want to see.”
“You have people who are just maggots on the internet, right?” says Gundur. “They say horrible things, but when you confront them in real life, they don’t have the stones to actually defend those views. The veneer of anonymity is gone.”
That’s why, he suggests, mass killings need more than just social media. “I don’t think that anger is driving these individuals. I think it would be unfair to make that assumption because I suspect that most of the shooters are operating in a pretty organised way.”
That, says Phillips, is because they’ve built an internally consistent narrative.
“If you are surrounded on all sides by information that seems to confirm this particular belief that you have, and every time you search for something, you get information that confirms your beliefs, it would actually be illogical for you to say, ‘You know what, I reject this’.”
But, she says, the most significant influences are usually mundane and mainstream.
That’s where the Great Replacement conspiracy theory comes into play.
The theory germinated in the mind of early 20th-century French nationalist Maurice Barrès, and sprouted again in French internet chat rooms in 2011 after a local critic revived the phrase.
In the US, former Republican congressman Steve King declared in 2017 that “we can’t restore our civilisation with somebody else’s babies”, while Fox News shock-jock Tucker Carlson last year accused the Democratic Party of wanting to “replace the current electorate with third-world voters”. Current Republican congressman Matt Gaetz has stated the US is experiencing “cultural genocide”.
Politicians, media pundits and community leaders know how and why such ideas get attention.
“Great Replacement is just a legacy of the white supremacy that has underwritten American society since its inception as a country,” says Gundur. “Let’s not mince words here – the US has always been an extremely prejudiced place.”
Now the fundamentally racist theory has gone mainstream in the age of algorithm-driven social media.
“At its core, it triggers [racists’] greatest fears – the irrelevance and destruction of the white race,” says Gundur.
Byman adds that social media serves to reinforce this.
“Many companies’ business models depend on keeping people glued to their screens so they can sell advertisements, and provocative content helps do this,” he says. “Now, the virtual and real worlds combine. On Facebook, groups might urge new recruits to meet with local talent spotters or organise real-world rallies designed to intimidate. In these in-person settings, they are then further radicalised.”
Jamie Seidel is a freelance journalist based in Adelaide.