
Facebook profits from White supremacy


Last year, a Facebook page administrator put out a clarion call for new followers: They were looking for “the good ole boys and girls from the south who believe in white [supremacy].” The page — named Southern Brotherhood — was live on Tuesday afternoon and riddled with photos of swastikas and expressions of white power.

Facebook has long banned content referencing white nationalism. But a plethora of hate groups still populate the site, and the company boosts its revenue by running ads on searches for these pages.

A new report from the Tech Transparency Project, a nonprofit tech watchdog, found 119 Facebook pages and 20 Facebook groups associated with white supremacy organizations. Of 226 groups identified as white-supremacist organizations by the Anti-Defamation League, the Southern Poverty Law Center, and a leaked version of Facebook’s dangerous organizations and individuals list, more than a third have a presence on the platform, according to the study.

Released Wednesday and obtained exclusively by The Washington Post, the report found that Facebook continues to serve ads against searches for white-supremacist content, such as the phrases "Ku Klux Klan" and "American Defense Skinheads," a practice long criticized by civil rights groups, who argue that the company prioritizes profits over the dangerous impact of such content.

The findings illustrate the ease with which bigoted groups can evade Facebook’s detection systems, despite the company’s years-long ban against posts that attack people on the basis of their race, religion, sexual orientation and other characteristics.

Activists have charged that by allowing hate speech to proliferate across its networks, Facebook opens the door for extremist groups to organize deadly attacks on marginalized groups. In the wake of several high-profile incidents in which alleged mass shooters shared prejudiced beliefs on social media, the findings add to the pressure on Facebook to curb such content.

“The people who are creating this content have become very tech savvy, so they are aware of the loopholes that exist and they’re using it to keep posting content,” said Libby Hemphill, an associate professor at the University of Michigan. “Platforms are often just playing catch up.”

Facebook, which last year renamed itself Meta, said it was conducting a comprehensive review of its systems to make sure ads no longer show up in search results related to banned organizations.

“We immediately began resolving an issue where ads were appearing in searches for terms related to banned organizations,” Facebook spokesperson Dani Lever said in a statement. “We will continue to work with outside experts and organizations in an effort to stay ahead of violent, hateful, and terrorism-related content and remove such content from our platforms.”

Facebook bars posts that attack people on the basis of their race, religion and sexual orientation, including any dehumanizing language or harmful stereotypes. In recent years, the company has expanded its hate speech policy to include calls for white separatism or the promotion of white nationalism. It also bans posts designed to incite violence.

For years, Facebook has faced criticism from civil rights activists, politicians and academics that it wasn’t doing enough to fight racism and discriminatory content on its platform.

“The stakes of inaction continue to be life-and-death,” said Color of Change President Rashad Robinson, whose group will release a petition Wednesday calling on Facebook to strengthen its systems to fight hateful content.

Activists have particularly clashed with the company, arguing in public and private conversations that Facebook’s enforcement of its hate speech ban is weak and that the company allows powerful people to violate its rules with few consequences.

In the summer of 2020, more than 1,000 companies joined an advertiser boycott to push Facebook to rid its social networks of hateful content, including white supremacy groups. In response, Facebook executives repeatedly said the company doesn't allow hate speech on its platform or seek to profit from bigotry.

Yet internally, Facebook’s own researchers found that hate speech reports surged that summer, in the wake of the widespread outrage over a police officer’s killing of George Floyd in Minnesota, according to a trove of internal documents surfaced by Facebook whistleblower Frances Haugen. That same summer, an independent civil rights audit offered a searing critique of the platform, arguing that Facebook’s hate speech policies were a “tremendous setback” when it came to protecting its users of color.

“The civil rights community continues to express significant concern with Facebook’s detection and removal of extremist and white nationalist content and its identification and removal of hate organizations,” the auditors wrote.

The audit relied on a 2020 report from the Tech Transparency Project, which found that more than 100 groups identified by the Southern Poverty Law Center or the Anti-Defamation League as white-supremacist organizations had a presence on Facebook.

The Tech Transparency Project, which is part of political watchdog group Campaign for Accountability, has conducted several critical reports on Facebook’s content moderation systems.

For its 2022 report, the Tech Transparency Project examined the white supremacy groups on Facebook's list of dangerous individuals and organizations, which was previously published by the investigative news site the Intercept. Nearly half of the groups on Facebook's own list had a presence on the social network, the report found.

Moreover, the researchers suggested that Facebook's automated systems, which among other functions scan images, text and video for potential policy violations, may in some cases fuel hate speech on the platform. Facebook automatically creates profile pages when a user lists a job, interest or business that does not have an existing page. Twenty percent of the 119 Facebook pages dedicated to white-supremacist groups were auto-generated by the company itself, the report estimated.

For example, Facebook automatically generated a page for “Pen1 Death Squad,” which is shorthand for the white-supremacist gang “Public Enemy Number 1,” the report said.

Lever said the company is “working to fix an auto generation issue, which incorrectly impacted a small number of pages.”

Facebook has repeatedly said the vast majority of offensive content it takes down was first identified by its artificial intelligence systems, though it often declines to say how many offensive posts remain on its social networks.

The Tech Transparency Project also searched Facebook for the names of the 226 white supremacy organizations. In more than 40 percent of those searches, Facebook served an advertisement from a wide range of marketers, including Walmart and Black churches. On Facebook’s own list of dangerous organizations and individuals, 39 percent of searches yielded advertisements.

In July 2020, many of Facebook’s advertisers pushed back against the persistence of bigotry on its platform. Color of Change, the Anti-Defamation League, the NAACP and other civil rights groups organized an advertiser boycott to call on the company to take stronger measures against hate speech.

Companies including Verizon, Ben & Jerry’s and Best Buy joined the campaign to demand that Facebook “find and remove public and private groups focused on white supremacy.”

Two years later, activists are still calling for change. Color of Change’s Robinson said that after years of negotiating with Facebook, activists are increasingly pushing Congress to force the company to take bolder actions.

“I can’t keep going to meetings with billionaires thinking that something is going to happen because in the end, there’s already a power imbalance,” he said. “At the end of the day, they get to decide whether or not they do it.”
