
Online disinformation sparked a wave of far-right violence in the UK

Riot police officers push back anti-migration protesters outside the Holiday Inn Express hotel housing asylum seekers on Aug. 4, 2024 in Rotherham, U.K.

Christopher Furlong | Getty Images News | Getty Images

It didn’t take long for false claims to appear on social media after three young girls were killed in the British town of Southport in July.

Within hours, false information — about the attacker’s name, religion, and migration status — gained significant traction, sparking a wave of disinformation that fueled days of violent riots across the U.K.

“Referencing a post on LinkedIn, a post on X falsely named the perpetrator as ‘Ali al-Shakati,’ rumored to be a migrant of Muslim faith. By 3 p.m. the following day, the false name had over 30,000 mentions on X alone,” Hannah Rose, a hate and extremism analyst at the Institute for Strategic Dialogue (ISD), told CNBC via email.

Other false information shared on social media claimed that the attacker was on an intelligence services watchlist, that he had come to the U.K. on a small boat in 2023, and that he was known to local mental health services, according to ISD’s analysis.

Police debunked the claims the day after they first emerged, saying the suspect was born in Britain, but the narrative had already taken hold.

Disinformation fueled biases and prejudice

This kind of false information closely aligns with the rhetoric that has fueled the anti-migration movement in the U.K. in recent years, said Joe Ondrak, research and tech lead for the U.K. at tech company Logically, which is developing artificial intelligence tools to fight misinformation.

“It’s catnip to them really, you know. It’s really the exact right thing to say to provoke a much angrier reaction than there likely would have been were the disinformation not circulated,” he told CNBC via video call.

Riot police officers push back anti-migration protesters outside a hotel housing asylum seekers on Aug. 4, 2024 in Rotherham, U.K.

Christopher Furlong | Getty Images

Far-right groups soon began organizing anti-migrant and anti-Islam protests, including a demonstration at the planned vigil for the girls who had been killed. This escalated into days of riots in the U.K. that saw attacks on mosques, immigration centers and hotels that house asylum seekers.

The disinformation circulated online tapped into pre-existing biases and prejudice, Ondrak explained, adding that incorrect reports often thrive at times of heightened emotions.

“It’s not a case of this false claim goes out and then, you know, it’s believed by everyone,” he said. The reports instead act as “a way to rationalize and reinforce pre-existing prejudice and bias and speculation before any sort of established truth could get out there.”

“It didn’t matter whether it was true or not,” he added.  

Many of the right-wing protesters claim that the high number of migrants in the U.K. fuels crime and violence. Migrants’ rights groups deny these claims.

The spread of disinformation online

Social media provided a crucial channel for the disinformation to circulate, both through algorithmic amplification and because large accounts shared it, according to ISD’s Rose.

Accounts with hundreds of thousands of followers, some with paid-for blue ticks on X, shared the false information, which the platform’s algorithms then pushed to other users, she explained.

“For example when you searched ‘Southport’ on TikTok, in the ‘Others Searched For’ section, which recommends similar content, the false name of the attacker was promoted by the platform itself, including 8 hours after the police confirmed that this information was incorrect,” Rose said.

Shop fronts are being boarded up to protect them from damage before the rally against the far-right and racism.

Thabo Jaiyesimi | Sopa Images | Lightrocket | Getty Images

ISD’s analysis showed that algorithms worked in a similar way on other platforms such as X, where the incorrect name of the attacker was featured as a trending topic.

As the riots continued, X owner Elon Musk weighed in, making controversial comments about the violent demonstrations on his platform. His statements prompted pushback from the U.K. government, with the country’s courts minister calling on Musk to “behave responsibly.”

TikTok and X did not immediately respond to CNBC’s request for comment.

The false claims also made their way onto Telegram, a platform which Ondrak said plays a role in consolidating narratives and exposing increasing numbers of people to “more hardline beliefs.”

“It was a case of all of these claims getting funneled through to what we call the post-Covid milieu of Telegram,” Ondrak added. This includes channels that were initially anti-vaxx but were co-opted by far-right figures promoting anti-migrant topics, he explained.

In response to a request for comment by CNBC, Telegram denied that it was helping spread misinformation. It said its moderators were monitoring the situation and removing channels and posts calling for violence, which are not permitted under its terms of service.

At least some of the accounts calling for participation in the protests could be traced back to the extreme right wing, according to analysis by Logically, including some linked to National Action, a right-wing extremist group banned as a terrorist organization in 2016 under the U.K.’s Terrorism Act.

Ondrak also noted that many groups that had previously circulated false information about the attack had started walking it back, saying it was a hoax.

On Wednesday, thousands of anti-racism protesters rallied in cities and towns across the U.K., far outnumbering recent anti-migration protests.

Content moderation?

The U.K.’s Online Safety Act is meant to combat hate speech, but it does not come into force until early next year and may not be enough to guard against some forms of disinformation.

On Wednesday, U.K. media regulator Ofcom sent a letter to social media platforms saying they should not wait for the new law to come into force. The U.K. government has also said social media companies should do more.

Many platforms already have terms and conditions and community guidelines that, to varying extents, cover harmful content and provide for enforcement against it.

A protester holds a placard reading “Racists not welcome here” during a counter demonstration against an anti-immigration protest called by far-right activists in the Walthamstow suburb of London on August 7, 2024.

Benjamin Cremel | Afp | Getty Images

The companies “have a responsibility to ensure that hatred and violence are not promoted on their platform,” ISD’s Rose said, but added that they need to do more to implement their rules.

She noted that ISD had found a range of content across a number of platforms that likely violated their terms of service but remained online.


Henry Parker, Logically’s vice president of corporate affairs, also pointed out nuances between platforms and jurisdictions. Companies invest varying amounts in content moderation, he told CNBC, and they face differing laws and regulations across countries.

“So there’s a dual role here. There’s a role for platforms to take more responsibility, live up to their own terms and conditions, work with third parties like fact checkers,” he said.

“And then there’s the responsibility of government to really be clear what their expectations are … and then be very clear about what will happen if you don’t meet those expectations. And we haven’t yet gone to that stage yet.”
