
Facebook Failed to Stop Ads Threatening Election Workers

Facebook says it does not allow content that threatens serious violence. But when researchers submitted ads threatening to “lynch,” “murder” and “execute” election workers around Election Day this year, the company’s largely automated moderation systems approved many of them.

Out of the 20 ads submitted by researchers containing violent content, 15 were approved by Facebook, according to a new test published by Global Witness, a watchdog group, and New York University’s Cybersecurity for Democracy. Researchers deleted the approved ads before they were published.

Ten of the test ads were submitted in Spanish. Facebook approved six of those ads, compared with nine of the 10 ads in English.

TikTok and YouTube rejected all the ads and suspended the accounts that attempted to submit them, the researchers said.

The tests underscored the challenges that social networks face in moderating increasingly partisan and violent content around elections, especially as poll workers are facing additional threats from election deniers and far-right extremists. The Justice Department and other federal agencies warned ahead of this year’s midterm elections that threats against election workers had surged.

In a statement, a spokesman for Meta, Facebook’s parent company, said it later disabled several accounts that had submitted the ads. The company also said that ads continue to be reviewed after they go live.

“This is a small sample of ads that are not representative of what people see on our platforms,” the spokesman said in a statement. “Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms.”

The research adds to previous tests conducted by the same groups, including one this year in which 20 ads were submitted containing political misinformation. In that test, Facebook approved only two of the misleading ads in English from an account in the United States, while TikTok approved about 90 percent of them. YouTube suspended the account that tried to submit the ads.

But this round used more overt language to test Facebook’s largely automated moderation systems. The ads included direct threats of violence, sourced from real statements by election deniers and other far-right extremists in the United States. One ad, which was approved by Facebook, read: “We are going to kill you all.” Another ad, which threatened to abuse children, was also approved.

“It was really quite shocking to see the results,” said Damon McCoy, an associate professor at N.Y.U. “I thought a really simple keyword search would have flagged this for manual review.”

In a statement, researchers also said they wanted to see social networks like Facebook increase content moderation efforts and offer more transparency around the moderation actions they take.

“The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published shows that what we are asking is technically possible,” they wrote.
