How Your Child’s Online Mistake Can Ruin Your Digital Life

When Jennifer Watkins got a message from YouTube saying her channel was being shut down, she wasn’t initially worried. She didn’t use YouTube, after all.

Her 7-year-old twin sons, though, used a Samsung tablet logged into her Google account to watch content for children and to make YouTube videos of themselves doing silly dances. Few of the videos had more than five views. But the video that got Ms. Watkins in trouble, which one son made, was different.

“Apparently it was a video of his bottom,” said Ms. Watkins, who has never seen it. “He’d been dared by a classmate to do a nudie video.”

Google-owned YouTube has A.I.-powered systems that review the hundreds of hours of video that are uploaded to the service every minute. The scanning process can sometimes go awry and tar innocent individuals as child abusers.

The New York Times has documented other episodes in which parents’ digital lives were upended by naked photos and videos of their children that Google’s A.I. systems flagged and that human reviewers determined to be illicit. Some parents have been investigated by the police as a result.

The “nudie video” in Ms. Watkins’s case, uploaded in September, was flagged within minutes as possible sexual exploitation of a child, a violation of Google’s terms of service with very serious consequences.

Ms. Watkins, a medical worker who lives in New South Wales, Australia, soon discovered that she was locked out of not just YouTube but all her accounts with Google. She lost access to her photos, documents and email, she said, meaning she couldn’t get messages about her work schedule, review her bank statements or “order a thickshake” via her McDonald’s app — which she logs into using her Google account.

Her account would eventually be deleted, a Google login page informed her, but she could appeal the decision. She clicked a Start Appeal button and wrote in a text box that her 7-year-old sons thought “butts are funny” and were responsible for uploading the video.

“This is harming me financially,” she added.

Children’s advocates and lawmakers around the world have pushed technology companies to stop the online spread of abusive imagery by monitoring their platforms for such material. Many communications providers now scan the photos and videos saved and shared by their users, looking for known images of abuse that have been reported to the authorities.

Google also wanted to be able to flag never-before-seen content. A few years ago, it developed an algorithm — trained on the known images — that seeks to identify new exploitative material; Google made it available to other companies, including Meta and TikTok.

Once an employee confirmed that the video posted by Ms. Watkins’s son was problematic, Google reported it to the National Center for Missing and Exploited Children, a nonprofit that acts as the federal clearinghouse for flagged content. The center can then add the video to its database of known images and decide whether to report it to local law enforcement.

Google is one of the top reporters of “apparent child pornography,” according to statistics from the national center. Google filed more than two million reports last year, far more than most digital communications companies, though fewer than the number filed by Meta.

(It is hard to judge the severity of the child abuse problem from the numbers alone, experts say. In one study of a small sampling of users flagged for sharing inappropriate images of children, data scientists at Facebook said more than 75 percent “did not exhibit malicious intent.” The users included teenagers in a romantic relationship sharing intimate images of themselves, and people who shared a “meme of a child’s genitals being bitten by an animal because they think it’s funny.”)

Apple has resisted pressure to scan iCloud for exploitative material. A spokesman pointed to a letter that the company sent to an advocacy group this year, expressing concern about the “security and privacy of our users” and reports “that innocent parties have been swept into dystopian dragnets.”

Last fall, Google’s trust and safety chief, Susan Jasper, wrote in a blog post that the company planned to update its appeals process to “improve the user experience” for people who “believe we made wrong decisions.” In a major change, the company now provides more information about why an account has been suspended, rather than a generic notification about a “severe violation” of the company’s policies. Ms. Watkins, for example, was told that child exploitation was the reason she had been locked out.

Regardless, Ms. Watkins’s repeated appeals were denied. She had a paid Google account, which allowed her and her husband to exchange messages with customer service agents. But in digital correspondence reviewed by The Times, the agents said the video, even if it was the oblivious act of a child, still violated company policies.

The draconian punishment for one silly video seemed unfair, Ms. Watkins said. She wondered why Google couldn’t give her a warning before cutting off access to all her accounts and more than 10 years of digital memories.

After more than a month of failed attempts to change the company’s mind, Ms. Watkins reached out to The Times. A day after a reporter inquired about her case, her Google account was restored.

“We do not want our platforms to be used to endanger or exploit children, and there’s a widespread demand that internet platforms take the firmest action to detect and prevent CSAM,” the company said in a statement, using a widely used acronym for child sexual abuse material. “In this case, we understand that the violative content was not uploaded maliciously.” The company did not say how users could escalate a denied appeal, beyond emailing a Times reporter.

Google is in a difficult position trying to adjudicate such appeals, said Dave Willner, a fellow at Stanford University’s Cyber Policy Center who has worked in trust and safety at several large technology companies. Even if a photo or video is innocent in its origin, it could be shared maliciously.

“Pedophiles will share images that parents took innocuously or collect them into collections because they just want to see naked kids,” Mr. Willner said.

The other challenge is the sheer volume of potentially exploitative content that Google flags.

“It’s just a very, very hard-to-solve problem regimenting value judgment at this scale,” Mr. Willner said. “They’re making hundreds of thousands, or millions, of decisions a year. When you roll the dice that many times, you are going to roll snake eyes.”

He said Ms. Watkins’s struggle after losing access to Google was “a good argument for spreading out your digital life” and not relying on one company for so many services.

Ms. Watkins took a different lesson from the experience: Parents shouldn’t use their own Google account for their children’s internet activity, and should instead set up a dedicated account — a choice that Google encourages.

She has not yet set up such an account for her twins. They are now barred from the internet.
