In a strikingly detailed joint advisory, agencies in the United States, the Netherlands and Canada identified various software programs used to manage the network, including one named Meliorator, which created fictitious users known as “souls” in various countries. The FBI won a court order allowing it to seize two web domains that the operation had used to register the email addresses behind the accounts.
“Today’s actions represent a first in disrupting a Russian-sponsored Generative AI-enhanced social media bot farm,” FBI Director Christopher A. Wray said in a statement. “Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government.”
Automated accounts with more detailed biographies posted original content, while a supporting cast of more generic accounts liked and reshared those posts. Officials did not respond to questions about how many real users saw the posts and whether any of them spread the messages further, so it is unclear how effective the campaign was.
The system evaded one of X’s techniques for verifying the authenticity of users by automatically copying onetime passcodes sent to the registered email addresses. References to Facebook and Instagram in the program code indicated that the operation intended to expand to those platforms, officials said.
The agencies recommended that social media companies improve their methods for catching covertly automated behavior.
X complied with a court order to furnish information on the accounts to the FBI, then deleted them. The company did not respond to questions from The Washington Post.
The Justice Department thanked X for its cooperation during the investigation, a sign of better communications between the government and the big social media companies after the Supreme Court upheld the right of officials to point out foreign influence operations.
John Scott-Railton, a researcher at the Canadian nonprofit Citizen Lab, said that the countries provided such detailed information about the inner workings of the botnet to help other investigators and companies know what to look for.
“They don’t think this problem is going anywhere, so they are sharing widely,” Scott-Railton said.
The documents show that large language models have helped Russian propagandists scale their operation and handle translation, he said. AI also helps them evade detection software that looks for repeated use of the same internet protocol addresses and other identifiers.
But many other systems are already operating, and they will improve as they adapt to what gets detected and what slips through, Scott-Railton said: “This isn’t even the tip of the iceberg. This is the drip of the iceberg.”