
Inside Meta’s struggle to make Instagram, Facebook safer for teens


For years, Meta touted its efforts to recruit and retain younger users as they flocked to competing social media apps such as Snapchat and TikTok. The company’s push for young users continued even after a coalition of state attorneys general launched a probe scrutinizing the impact of its social networks on young people’s mental health.

But inside Meta, services designed to attract children and teens were often plagued by thorny debates, as staffers clashed about the best way to foster growth while protecting vulnerable youth, according to internal documents viewed by The Washington Post and current and former employees, some of whom spoke on the condition of anonymity to describe internal matters.

Staffers said some efforts to measure and respond to issues they felt were harmful but that didn’t violate company rules were thwarted. Company leaders sometimes failed to respond to their safety concerns or pushed back against proposals they argued would hurt user growth. The company has also reduced or decentralized teams dedicated to protecting users of all ages from problematic content.

The internal dispute over how to attract kids to social media safely returned to the spotlight Tuesday when a former senior engineering and product leader at Meta testified during a Senate hearing on the connection between social media and teens’ mental health.

Arturo Béjar spoke before a Senate Judiciary subcommittee about how his attempts to convince senior leaders, including Meta chief executive Mark Zuckerberg, to adopt what he sees as bolder actions were largely rebuffed.


“I think that we are facing an urgent issue that the amount of harmful experiences that 13- to 15-year-olds have on social media is really significant,” Béjar said in an interview ahead of the hearing. “If you knew at the school you were going to send your kids to that the rates of bullying and harassment or unwanted sexual advances were what was in my email to Mark Zuckerberg, I don’t think you would send your kids to the school.”

Meta spokesman Andy Stone said in a statement that every day “countless people inside and outside of Meta are working on how to help keep young people safe online.”

“Working with parents and experts, we have also introduced over 30 tools to support teens and their families in having safe, positive experiences online,” Stone said. “All of this work continues.”

Instagram and Facebook’s impact on kids and teens is under unprecedented scrutiny following legal actions by 41 states and D.C., which allege Meta built addictive features into its apps, and a suite of lawsuits from parents and school districts accusing platforms of playing a critical role in exacerbating the teen mental health crisis.

Amid this outcry, Meta has continued to chase young users. Most recently, Meta lowered the age limit for its languishing virtual reality products, dropping the minimum ages for its social app Horizon Worlds to 13 and its Quest VR headsets to 10.

In October 2021, Zuckerberg announced a plan to retool the company around young people, describing a years-long shift to “make serving young adults their north star.”

This interest came as young people were fleeing the site. Researchers and product leaders inside the company produced detailed reports analyzing problems in recruiting and retaining youth, as revealed by internal documents surfaced by Meta whistleblower Frances Haugen. In one document, young adults were reported to perceive Facebook as irrelevant and designed for “people in their 40s or 50s.”


“Our services have gotten dialed to be the best for the most people who use them rather than specifically for young adults,” Zuckerberg said in the October 2021 announcement, citing competition with TikTok.

But employees say debates over proposed safety tools have pitted the company’s keen interest in growing its social networks against its need to protect users from harmful content.


For instance, some staffers argued that when teens sign up for a new Instagram account it should automatically be private, forcing them to adjust their settings if they wanted a public option. But those employees faced internal pushback from leaders on the company’s growth team who argued such a move would hurt the platform’s metrics, according to a person familiar with the matter, who spoke on the condition of anonymity to describe internal matters.

They settled on an in-between option: When teens sign up, the private account option is pre-checked, but they are offered easy access to revert to the public version. Stone said that during internal tests, eight out of 10 young people accepted the private default settings during sign-up.

“It can be tempting for company leaders to look at untapped youth markets as an easy way to drive growth, while ignoring their specific developmental needs,” said Vaishnavi J, a technology policy adviser who was Meta’s head of youth policy.

“Companies need to build products that young people can freely navigate without worrying about their physical or emotional well-being,” J added.


In November 2020, Béjar, then a consultant for Meta, and members of Instagram’s well-being team came up with a new way to tackle negative experiences such as bullying, harassment and unwanted sexual advances. Historically, Meta has often relied on “prevalence rates,” which measure how often posts that violate the company’s rules slip through the cracks. Meta estimates prevalence rates by calculating what percentage of total views on Facebook or Instagram are views on violating content.
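That metric is a simple ratio of views. A minimal sketch of the calculation, with hypothetical figures for illustration (the function and numbers below are not Meta’s):

```python
# Hypothetical sketch of the prevalence-rate calculation described above.
# The function name and example figures are illustrative, not Meta's.

def prevalence_rate(violating_views: int, total_views: int) -> float:
    """Percentage of all content views that landed on rule-violating posts."""
    if total_views == 0:
        return 0.0
    return 100.0 * violating_views / total_views

# Example: 5,000 views of violating posts out of 10 million total views.
print(f"{prevalence_rate(5_000, 10_000_000):.3f}%")  # prints "0.050%"
```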

Béjar and his team argued prevalence rates often fail to account for harmful content that doesn’t technically violate the company’s content rules and mask the danger of rare interactions that are still traumatizing to users.

Instead, Béjar and his team recommended letting users define negative interactions themselves using a new approach: the Bad Experiences and Encounters Framework. It relied on users relaying experiences with bullying, unwanted advances, violence and misinformation among other harms, according to documents shared with The Washington Post. The Wall Street Journal first reported on these documents.

In reports, presentations and emails, Béjar presented statistics showing that the number of bad experiences teen users had was far higher than prevalence rates would suggest. He illustrated the finding in an October 2021 email to Zuckerberg and Chief Operating Officer Sheryl Sandberg that described how his then-16-year-old daughter posted an Instagram video about cars and received a comment telling her to “Get back to the kitchen.”

“It was deeply upsetting to her,” Béjar wrote. “At the same time the comment is far from being policy violating, and our tools of blocking or deleting mean that this person will go to other profiles and continue to spread misogyny.” Béjar said he got a response from Sandberg acknowledging the harmful nature of the comment, but Zuckerberg didn’t respond.


Later, Béjar made another push with Instagram head Adam Mosseri, outlining some alarming statistics: 13 percent of teens between the ages of 13 and 15 had experienced an unwanted sexual advance on Instagram within the previous seven days.

In their meeting, Béjar said, Mosseri appeared to understand the issues, but Béjar’s approach hasn’t gained much traction inside Meta.

Though the company still uses prevalence rates, Stone said user perception surveys have informed safety measures, including an artificial intelligence tool that notifies users when their comment may be considered offensive before it’s posted. The company says it reduces the visibility of potentially problematic content that doesn’t break its rules.

Meta’s attempts to recruit young users and keep them safe have been tested by a litany of organizational and market pressures, as safety teams — including those that work on issues related to kids and teens — have been slashed during a wave of layoffs.

Meta tapped Pavni Diwanji, a former Google executive who helped oversee the development of YouTube Kids, to lead the company’s youth product efforts. She was given a remit to develop tools to make the experience of teens on Instagram better and safer, according to people familiar with the matter.

But after Diwanji left Meta, the company folded those youth safety product efforts into another team’s portfolio. Meta also disbanded and dispersed its responsible innovation team — a group of people in charge of spotting potential safety concerns in upcoming products.

Stone said many of the team members have moved on to other teams within the company to work on similar issues.

Béjar doesn’t believe lawmakers should rely on Meta to make changes. Instead, he said Congress should pass legislation that would force the company to take bolder actions.

“Every parent kind of knows how bad it is,” he said. “I think that we’re at a time where there’s a wonderful opportunity where [there can be] bipartisan legislation.”

Cristiano Lima contributed reporting.


