As the war grinds toward its sixth month, however, Russian propaganda techniques have evolved — and the tech firms haven’t kept up.
Ukrainian officials who have flagged thousands of tweets, YouTube videos and other social media posts as Russian propaganda or anti-Ukrainian hate speech say the companies have grown less responsive to their requests to remove such content. New research shared with The Washington Post by a Europe-based nonprofit initiative confirms that many of those requests seem to be going unheeded, with accounts parroting Kremlin talking points, spewing anti-Ukrainian slurs or even impersonating Ukrainian officials remaining active on major social networks.
As a result, researchers say, Kremlin-backed narratives are once again propagating across Europe, threatening to undermine popular support for Ukraine in countries that it views as critical to its defense.
“When it was the first months of full-scale Russian aggression, [the U.S. tech companies] were very proactive, very interested to help,” said Mykola Balaban, deputy head of Ukraine’s Center for Strategic Communications and Information Security, a government agency. “Now they are avoiding to make a call with us.”
While some platforms, including Meta’s Facebook and Microsoft’s LinkedIn, have continued to correspond regularly with Balaban’s agency, he said Google-owned YouTube hasn’t returned its emails for almost two months.
Frustrated with the radio silence, Ukraine partnered in late April with independent researchers at the Disinformation Situation Center, a Europe-based coalition spanning multiple nonprofit organizations, to analyze the effectiveness of the platforms’ moderation efforts. The findings, provided to The Post ahead of their publication Thursday, appear to bear out at least some of Balaban’s concerns.
As Russian efforts shift from state media megaphones to individual influencers and “troll armies” coordinated via the messaging app Telegram, Ukrainian authorities and their nonprofit partners have been tracking and flagging posts that use derogatory or dehumanizing terms for Ukrainians as a way of justifying the war.
The report finds that upward of 70 percent of posts flagged as anti-Ukrainian hate speech on YouTube and Twitter remained available as of late June, while more than 90 percent of the accounts responsible for such posts remained active. Posts included slurs that blend the Russian words for “Ukrainian” and “baboon”; a tweet that translates to “Death to Bandera supporters, take no prisoners!” — a reference to the late Ukrainian nationalist leader Stepan Bandera intended to link Ukraine to Nazi Germany — and a YouTube comment in Russian that translates to, “Ukraine will be wiped off the face of the Earth, hurray!”
Facebook, YouTube and Twitter all have policies against glorifying Russia’s invasion or attacking Ukrainians based on their nationality, though the companies noted that it usually takes more than a single violation for the offending account to be suspended. Both YouTube and Twitter said they took action on some accounts after The Post brought them to the companies’ attention on Wednesday.
YouTube spokeswoman Ivy Choi did not directly address the company’s responsiveness to Ukraine’s takedown requests, but she said the company has “stayed in regular contact with the Ukrainian government” and has removed more than 70,000 videos and 9,000 channels for violating its policies since the war began.
Twitter spokeswoman Elizabeth Busby also didn’t directly address Ukrainian officials’ concerns, but said the company continues to work with outside organizations and monitors for policy violations. Busby added that Twitter’s policies go beyond a “leave up” vs. “take down” binary, including efforts to elevate credible information about the war and avoid recommending state media accounts or posts that may be misleading.
The report also finds that LinkedIn, a site better known for professional networking than politics, removed fewer than half of the posts that Ukrainian officials flagged as examples of Russian propaganda justifying the war. LinkedIn did not respond to a request for comment.
On the positive side, the researchers found that Facebook had removed all 98 of the posts the Ukrainian government and its partners flagged as containing anti-Ukrainian hate speech, though many of the accounts responsible remained active. (Facebook spokeswoman Erin McPike noted that the company’s policies generally don’t include a ban for first-time offenders, but do include escalating consequences for repeat offenders.) Facebook and its sister platform Instagram also appeared to be generally responsive to requests to take down accounts impersonating Ukrainian officials and advertisements spreading Kremlin talking points, though the researchers said they would prefer to see the platforms take a more proactive approach.
Of the more than 15,000 flagged posts in the study, the majority were tweets, perhaps because Twitter makes it easier for users to create multiple accounts, allows anonymity and has looser speech restrictions than its rivals. More than 1,000 of the posts and comments were from YouTube, while the numbers flagged on Facebook and Instagram were in the hundreds, and on LinkedIn fewer than 100.
“I don’t think it’s bad will on the part of the tech companies,” said Felix Kartte, senior adviser for the global nonprofit advocacy group Reset Tech, which focuses on accountability for social media platforms, and a co-author of the report. “It’s really just lack of resources, lack of investment, lack of preparedness,” and a shortage of staff with Russian and Ukrainian language skills and local expertise.
The criticism isn’t novel. While the largest U.S. social media companies have expanded their content moderation efforts globally in recent years, researchers and whistleblowers have consistently pointed out that they devote fewer resources and have less expertise in non-English-speaking regions. Companies including Facebook and Google have also been criticized since long before the latest Russian invasion for paying too little heed to Kremlin-backed disinformation campaigns in Ukraine, including operations as early as 2014 that foreshadowed Russian interference in the 2016 U.S. election.
But when Russia launched its invasion of Ukraine in late February, Ukraine suddenly had the world’s attention, including that of the tech companies. Under pressure from governments, the public and in some cases their own employees, tech firms rewrote their rule books to tackle Russian propaganda and protect Ukrainians online. Most notably, they blocked and downranked Russia’s state media outlets, such as Russia Today, which had amassed huge global followings on various online platforms.
The moves earned the politically embattled tech giants good press, but they also seemed to be making a difference. A March 16 analysis by The Post found that social media interactions with major Russian propaganda outlets spiked as the invasion began, then plummeted as the platforms took action.
Researchers and Ukrainian officials who spoke to The Post for this story agreed that the crackdown has dented the Kremlin’s capacity to spread false narratives about the war. In retribution, the Russian government blocked Facebook, Instagram and Twitter within its borders, though YouTube remains available.
But the tech giants’ attention has flagged over time, Ukrainian officials and researchers say, as headlines and public outrage in the United States and Western Europe have shifted from Russian aggression to domestic issues such as inflation, gas prices and, in Europe, the influx of Ukrainian refugees.
The Kremlin has taken note. With big state media accounts suspended or muffled, researchers say Russian leaders and influencers have shifted to the semiprivate messaging app Telegram to direct information campaigns via swarms of smaller accounts.
For instance, Balaban said he has seen signs of Russian influence in a spate of recent posts that seek to mislead Ukrainians about the safety of fleeing the country or sow division around the country’s military draft.
“This psychological game is very similar in methodology to 2014 or 2016,” Balaban said. “To find some problems in society, some cracks in society to exploit and create on that basis a conflict.”
Thierry Breton, the commissioner for the internal market of the European Union, who has targeted tech for regulation, said the report from the Disinformation Situation Center exemplifies the need for tough regulations to hold tech platforms accountable.
“The Russian disinformation war is a real invasion of our digital space,” Breton said in a statement to The Post. “The examples in the report show once again that big online platforms have taken insufficient measures to protect their users against this invasion. This has real life consequences across the whole world.”
Ukrainians aren’t the only audience for Russian propaganda, said Pia Lamberty, co-CEO of CeMAS, a German think tank that tracks online conspiracy theories and extremism. Pro-Russian influencers are also spreading disinformation and war denialism in Western Europe, aimed at undermining public support for costly measures such as sanctions on Russian oil or military support for Ukraine.
In Germany, they’re tapping into a small but growing segment of the population that has embraced right-wing politics and conspiracy theories about everything from the war in Ukraine to coronavirus vaccines, Lamberty said.
“Disinformation is not only successful if people believe what you say, but when they get undecided. Somebody who’s undecided, whether Ukraine is the victim of Russian aggression, or whether Russia had maybe a reason because [the Ukrainians] are maybe fascist, will be less supportive of Ukraine,” she said.
While it’s important for tech firms to respond to takedown requests, Lamberty added, what’s needed most is a more proactive, systematic approach to monitoring Russian propaganda networks across platforms. “As soon as you need a fact check, you’re already too late,” she said.