Google Changes Appeals Process for Suspected Child Abuse Images

When Google informed a mother in Colorado that her account had been disabled, it felt as if her house had burned down, she said. In an instant, she lost access to her wedding photos, videos of her son growing up, her emails going back a decade, her tax documents and everything else she had kept in what she thought would be the safest place. She had no idea why.

Google refused to reconsider the decision in August, saying her YouTube account contained harmful content that might be illegal. It took her weeks to discover what had happened: Her 9-year-old eventually confessed that he had used an old smartphone of hers to upload a YouTube Short of himself dancing around naked.

Google has an elaborate system, involving algorithmic monitoring and human review, to prevent the sharing and storing of exploitative images of children on its platforms. If a photo or video uploaded to the company’s servers is deemed to be sexually explicit content featuring a minor, Google disables the user’s account, across all of Google’s services, and reports the content to a nonprofit that works with law enforcement. Users have an opportunity to challenge Google’s action, but in the past they had no real opportunity to provide context for a nude photo or video of a child.

Now, after reporting by The New York Times, Google has changed its appeals process, giving users accused of the heinous crime of child sexual exploitation the ability to prove their innocence. The content deemed exploitative will still be removed from Google and reported, but the users will be able to explain why it was in their account — clarifying, for example, that it was a child’s ill-thought-out prank.

Susan Jasper, Google’s head of trust and safety operations, said in a blog post that the company would “provide more detailed reasons for account suspensions.” She added, “And we will also update our appeals process to allow users to submit even more context about their account, including to share more information and documentation from relevant independent professionals or law enforcement agencies to aid our understanding of the content detected in the account.”

In recent months The Times, reporting on the power that technology companies wield over the most intimate parts of their users’ lives, brought to Google’s attention several instances when its previous review process appeared to have gone awry.

In two separate cases, fathers took photos of their naked toddlers to facilitate medical treatment. An algorithm automatically flagged the images, and then human moderators deemed them in violation of Google’s rules. The police determined that the fathers had committed no crime, but the company still deleted their accounts.

The fathers, one in California and the other in Texas, found themselves stymied by Google’s previous appeals process: At no point were they able to provide medical records, communications with their doctors or police documents absolving them of wrongdoing. The father in San Francisco eventually got six months of his Google data back, but on a thumb drive from the Police Department, which had gotten it from the company with a warrant.

“When we find child sexual abuse material on our platforms, we remove it and suspend the related account,” a Google spokesman, Matt Bryant, said in a statement. “We take the implications of suspending an account seriously, and our teams work constantly to minimize the risk of an incorrect suspension.”

Technology companies that offer free services to consumers are notoriously bad at customer support. Google has billions of users. Last year, it disabled more than 270,000 accounts for violating its rules against child sexual abuse material. In the first half of this year, it disabled more than it did in all of 2021.

“We don’t know what percentage of those are false positives,” said Kate Klonick, an associate professor at St. John’s University School of Law who studies internet governance issues. Even a 1 percent error rate on last year’s 270,000 suspensions would amount to 2,700 mistaken closures a year, or hundreds of appeals per month, she said. She predicted that Google would need to expand its trust and safety team to handle the disputes.

“It seems like Google is making the right move,” Ms. Klonick said, “to adjudicate and solve for false positives. But it’s an expensive proposition.”

Evelyn Douek, an assistant professor at Stanford Law School, said she would like Google to provide more details about how the new appeals process would work.

“Just the establishment of a process doesn’t solve everything. The devil is in the details,” she said. “Is the new review meaningful? What is the timeline?”

A Colorado mother eventually received a warning on YouTube saying her content violated community guidelines. Credit: YouTube

It took four months for the mother in Colorado, who asked that her name not be used to protect her son’s privacy, to get her account back. Google reinstated it after The Times brought the case to the company’s attention.

“We understand how upsetting it would be to lose access to your Google account, and the data stored in it, due to a mistaken circumstance,” Mr. Bryant said in a statement. “These cases are extraordinarily rare, but we are working on ways to improve the appeals process when people come to us with questions about their account or believe we made the wrong decision.”

Google did not tell the woman that the account was active again. Ten days after her account had been reinstated, she learned of the decision from a Times reporter.

When she logged in, she found that everything had been restored except for the video her son had made. A message popped up on YouTube, featuring an illustration of a referee blowing a whistle, saying her content had violated community guidelines. “Because it’s the first time, this is just a warning,” the message said.

“I wish they had just started here in the first place,” she said. “It would have saved me months of tears.”

Jason Scott, a digital archivist who wrote a memorably profane blog post in 2009 warning people not to trust the cloud, said companies should be legally obligated to give users their data, even when an account was closed for rule violations.

“Data storage should be like tenant law,” Mr. Scott said. “You shouldn’t be able to hold someone’s data and not give it back.”

The mother also received an email from “The Google Team,” sent on Dec. 9.

“We understand that you attempted to appeal this several times, and apologize for the inconvenience this caused,” it said. “We hope you can understand we have strict policies to prevent our services from being used to share harmful or illegal content, especially egregious content like child sexual abuse material.”

Many companies besides Google monitor their platforms to try to prevent the rampant sharing of child sexual abuse images. Last year, more than 100 companies sent 29 million reports of suspected child exploitation to the National Center for Missing and Exploited Children, the nonprofit that acts as the clearinghouse for such material and passes reports on to law enforcement for investigation. The nonprofit does not track how many of those reports represent true abuse.

Meta sends the highest volume of reports to the national center — more than 25 million in 2021 from Facebook and Instagram. Last year, data scientists at the company analyzed some of the flagged material and found examples that qualified as illegal under federal law but were “non-malicious.” In a sample of 150 flagged accounts, more than 75 percent “did not exhibit malicious intent,” said the researchers, giving examples that included a “meme of a child’s genitals being bitten by an animal” that was shared humorously and teenagers sexting each other.
