
Meta layoffs add fear of misinformation surging on Facebook, Instagram

Three months ago, Meta touted recent wins in detecting the increasingly robust campaigns by Russian-based operatives to covertly influence world affairs in the wake of the war in Ukraine.

President of Global Affairs Nick Clegg told reporters during a press briefing that despite slashing spending as the company lays off thousands, Meta was well positioned to keep up the fight against foreign influence campaigns, misinformation and other problematic content.

Meta could both become more efficient and offer “the industry’s leading integrity and safety measures across the company,” he said.

Now, Clegg’s optimism is being put to the test as the social media giant prepares this month to hand out a new round of pink slips to workers in its business divisions, including teams that handle content moderation, policy and regulatory issues.

Affected employees are expected to be notified Wednesday, according to a person familiar with the matter, part of a months-long downsizing effort.

But at least a half-dozen current and former Meta employees who have worked on trust and safety issues say severe cuts in those divisions could hamper the company’s ability to respond to viral political misinformation, foreign influence campaigns and regulatory challenges. They say they worry that the layoffs — which are expected to hit the company’s business division harder than engineering — could make Facebook, Instagram and WhatsApp more dangerous at a time of particularly acute geopolitical concern.

The company will be grappling next year with dozens of elections — including the U.S. presidential vote — while covert influence campaigns from Russia and China have grown more aggressive in their attempts to sway politics in other regions of the world. Governments around the world are passing laws to demand social media companies bend to their political will on content.

“The problem of course is that simultaneously we are going into the election season not just here in the United States, but everywhere else in the world,” said Bhaskar Chakravorti, dean of global business at the Fletcher School at Tufts University. “2024 is going to be a massive year for elections and concerns about content moderation peak during election years.”

Meta spokesperson Andy Stone said in a statement, “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community.”

Meta is part of a larger trend among tech giants that are cutting back amid a downturn in the industry, joining Google, Amazon and Microsoft in trimming tens of thousands of jobs.

Meta, in particular, has faced an uptick in competition for advertising dollars and users from the short-form video platform TikTok, while new privacy rules adopted by Apple and broader economic challenges have hurt Meta’s digital advertising business. The company has spent heavily over the past few years to build the so-called metaverse, a term for immersive digital realms accessed through augmented and virtual reality.

In total, Meta has said it is eliminating 21,000 roles, including the latest round of 10,000. Founder Mark Zuckerberg has deemed 2023 the “year of efficiency” as he seeks to remake the company to be leaner and faster.

Meta began bolstering its teams that handle policy and safety issues in particular after negative publicity over the 2018 Cambridge Analytica scandal, when a political consultancy accessed the personal data of millions of Facebook users, and revelations that Russia used the company’s social networks to influence the 2016 presidential election. Since then, Meta has beefed up its lobbying, content moderation and fact-checking programs.

Meta’s artificial intelligence-powered systems and a small army of contract workers handle most of the company’s decisions to take down or leave up certain posts or accounts. But the full-time workers on its trust and safety teams play a critical role in shaping the company’s response to complex political and cultural situations around the world, said the current and former employees, most of whom spoke on the condition of anonymity to discuss internal matters.

A variety of teams work on everything from defining what counts as rule-breaking content, such as hate speech, to training the algorithms that flag posts for review by the company’s thousands of third-party contract workers. Investigators search for foreign influence campaigns and criminal activity, while other teams manage relationships with governments and politicians — all work that helps protect the platforms from further disaster.

“The nightmare scenario is a steady degradation of the processes that nobody thinks about because they work,” said one employee.

Meta said it cut about 4,000 workers from its tech divisions last month, The Post has reported, which means that roughly 6,000 workers in nontechnical roles will likely have been laid off by the end of this third wave of cuts. Zuckerberg said in March that he wanted to return the company to “a more optimal ratio of engineers to other roles.”

“It’s important for all groups to get leaner and more efficient to enable our technology groups to get as lean and efficient as possible,” he said in a statement at the time. “We will make sure we continue to meet all our critical and legal obligations as we find ways to operate more efficiently.”

Meta’s cuts may mean it has fewer people to handle emerging threats or complex policy questions that it does not want to defer to its artificial intelligence-powered systems and lower-level content moderation contractors.

Brian Fishman, who once led Meta’s efforts to fight terrorism and hate organizations on the social networks, said the company was often too slow to make important policy decisions.

For instance, Meta employees debated for years how to handle posts in which users invoke the Arabic term shaheed, often translated as martyr, to describe terrorists or others on its dangerous organizations and individuals list, he said. The company bans explicit praise for any person or group on that list.

Earlier this year, Meta requested that the Oversight Board, an independent group of experts that reviews the company’s content moderation decisions, give it an opinion on the matter. Fishman said that’s a welcome development in the company’s years-long debate.

“One of the reasons I left is because that organization had gotten slow and bloated and it was very hard to make a decision,” he said.

Zuckerberg argues that the company’s November workforce cuts proved that having fewer people means Meta can make decisions more quickly and more nimbly. He said that he underestimated the “indirect costs” of adding projects that seemed to bring value to the company but ultimately took away from other priorities.

Meta will need that expertise as authoritarian governments increasingly issue legal demands for certain types of content to be taken down or pass new rules aimed at giving them more control over the company’s content moderation practices, some of the employees said.

And regional expertise could be particularly helpful next year, when there will be at least 65 elections in 54 countries, including the United States and India, in which content has the power to spark offline harm, said Katie Harbath, a former policy director at Meta and current tech policy consultant.

“You’re going to have these candidates really pushing the envelope when it comes to stoking potential violence or potential hate speech,” she said.

Recent innovations in generative artificial intelligence, a form of the technology that can produce humanlike original content, mean it is now much easier to create sophisticated fake images or text that could stoke political confusion in volatile news environments.

Major technology companies such as Meta have not substantially altered their policies against “deepfakes” since 2019. While Meta has said it is making a big push to “turbocharge” its investment in artificial intelligence, the company has already laid off employees who worked on policy issues related to artificial intelligence, according to two people familiar with the matter who spoke on the condition of anonymity to discuss sensitive internal discussions.

“I think we’re going to see misinformation and disinformation on a level and a scale that we haven’t really seen in the past because the ease of producing that kind of material is just accelerating,” said Fishman. “Capacity at these companies is really, really critical.”
