
Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says

Images of Taylor Swift that had been generated by artificial intelligence and had spread widely across social media in late January probably originated as part of a recurring challenge on one of the internet’s most notorious message boards, according to a new report.

Graphika, a research firm that studies disinformation, traced the images back to one community on 4chan, a message board known for sharing hate speech, conspiracy theories and, increasingly, racist and offensive content created using A.I.

The people on 4chan who created the images of the singer did so in a sort of game, the researchers said — a test to see whether they could create lewd (and sometimes violent) images of famous female figures.

The synthetic Swift images spilled out onto other platforms and were viewed millions of times. Fans rallied to Ms. Swift’s defense, and lawmakers demanded stronger protections against A.I.-created images.

Graphika found a thread of messages on 4chan that encouraged people to try to evade safeguards set up by image generator tools, including OpenAI’s DALL-E, Microsoft Designer and Bing Image Creator. Users were instructed to share “tips and tricks to find new ways to bypass filters” and were told, “Good luck, be creative.”

Sharing unsavory content via games allows people to feel connected to a wider community, and they are motivated by the cachet they receive for participating, experts said. Ahead of the midterm elections in 2022, groups on platforms like Telegram, WhatsApp and Truth Social engaged in a hunt for election fraud, winning points or honorary titles for producing supposed evidence of voter malfeasance. (True proof of ballot fraud is exceptionally rare.)

In the 4chan thread that led to the fake images of Ms. Swift, several users received compliments — “beautiful gen anon,” one wrote — and were asked to share the prompt language used to create the images. One user lamented that a prompt produced an image of a celebrity who was clad in a swimsuit rather than nude.

Rules posted by 4chan that apply sitewide do not specifically prohibit sexually explicit A.I.-generated images of real adults.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative A.I. products, and new restrictions are seen as just another obstacle to ‘defeat,’” Cristina López G., a senior analyst at Graphika, said in a statement. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Ms. Swift is “far from the only victim,” Ms. López G. said. In the 4chan community that manipulated her likeness, many actresses, singers and politicians were featured more frequently than Ms. Swift.

OpenAI said in a statement that the explicit images of Ms. Swift were not generated using its tools, noting that it filters out the most explicit content when training its DALL-E model. The company also said it uses other safety guardrails, such as denying requests that ask for a public figure by name or seek explicit content.

Microsoft said that it was “continuing to investigate these images” and added that it had “strengthened our existing safety systems to further prevent our services from being misused to help generate images like them.” The company prohibits users from using its tools to create adult or intimate content without consent and warns repeat offenders that they may be blocked.

Fake pornography generated with software has been a blight since at least 2017, affecting unwilling celebrities, government figures, Twitch streamers, students and others. Patchy regulation leaves few victims with legal recourse; even fewer have a devoted fan base to drown out fake images with coordinated “Protect Taylor Swift” posts.

After the fake images of Ms. Swift went viral, Karine Jean-Pierre, the White House press secretary, called the situation “alarming” and said lax enforcement by social media companies of their own rules disproportionately affected women and girls. She said the Justice Department had recently funded the first national helpline for people targeted by image-based sexual abuse, which the department described as meeting a “rising need for services” related to the distribution of intimate images without consent. SAG-AFTRA, the union representing tens of thousands of actors, called the fake images of Ms. Swift and others a “theft of their privacy and right to autonomy.”

Artificially generated versions of Ms. Swift have also been used to promote scams involving Le Creuset cookware. A.I. was used to impersonate President Biden’s voice in robocalls dissuading voters from participating in the New Hampshire primary election. Tech experts say that as A.I. tools become more accessible and easier to use, audio spoofs and videos with realistic avatars could be created in mere minutes.

Researchers said the first sexually explicit A.I. image of Ms. Swift on the 4chan thread appeared on Jan. 6, 11 days before such images were said to have appeared on Telegram and 12 days before they emerged on X. 404 Media reported on Jan. 25 that the viral Swift images had jumped into mainstream social media platforms from 4chan and a Telegram group dedicated to abusive images of women. The British news organization Daily Mail reported that week that a website known for sharing sexualized images of celebrities posted the Swift images on Jan. 15.

For several days, X blocked searches for Taylor Swift “with an abundance of caution so we can make sure that we were cleaning up and removing all imagery,” said Joe Benarroch, the company’s head of business operations.
