- One of the most viral explicit posts depicting Swift received more than 45 million views, The Verge reported, before X removed it. The images appear to have originated in a Telegram channel that produces similar images, according to 404 Media.
- In an effort to drown out searches for the explicit images on social media platforms, Swift’s fans posted widely with the phrase, “Protect Taylor Swift.”
- Representatives for Swift and X, formerly known as Twitter, did not immediately respond to The Washington Post’s requests for comment Friday morning.
Deepfakes are lifelike fake videos or images created with face- and audio-swapping technology. They often go viral on social platforms, and the technology has grown better at replicating a person’s voice. Meanwhile, the tools for identifying AI-made images have struggled to keep up, making it harder for those platforms to catch problematic videos.
Celebrities have warned followers not to be duped by the deepfakes.
Easy access to AI imaging technology has created a new tool to target women, allowing almost anyone to produce and share nude images of them.
Ahead of the 2024 presidential election, AI is also giving politicians an excuse to dismiss potentially damaging evidence as AI-generated fakes, even as real deepfakes are being used to spread misinformation.
There is no federal law that makes it illegal to create deepfakes, though some lawmakers have introduced bills that would change that.
In Congress, Rep. Joseph Morelle (D-N.Y.) has introduced a bill called the Preventing Deepfakes of Intimate Images Act, which he says would make creating those types of videos a federal crime.
“The spread of AI-generated explicit images of Taylor Swift is appalling — and sadly, it’s happening to women everywhere, every day,” Morelle wrote Thursday on X.
In a news conference Friday, White House spokeswoman Karine Jean-Pierre said the Biden administration was “alarmed” by the images’ spread. Social media companies, she said, should “prevent the spread of misinformation and nonconsensual intimate imagery of real people.”
President Biden issued an executive order in October that a White House statement said would establish standards and best practices for detecting AI-generated content. But it stopped short of requiring companies to label AI-generated photos, videos and audio.
The federal government, though, has been slow to act, some state lawmakers say. In response, states are trying to lead the way on implementing guardrails against AI, with some, including Texas and California, enacting measures to protect against the use of deepfakes in elections. Georgia and Virginia, among others, have banned the creation of nonconsensual deepfake pornography.
Still, some argue that banning all types of the fake videos could violate the First Amendment, since the videos are “technically forms of expression,” according to the Princeton Legal Journal. Nonconsensual fake pornographic videos, though, would not be protected under exceptions to the First Amendment that include obscenity and defamation, the journal wrote.
Some legal scholars also warn that AI-generated images may not fall under copyright protections, since they draw from data sets of millions of images.
Between December 2018 and December 2020, the number of fake videos detected online doubled every six months, according to Sensity AI, which tracks deepfakes. Sensity found that at least 90 percent of the videos were nonconsensual porn, most of which involve altered footage of women.
As AI-generated porn has ballooned across the internet, researchers found that more than 143,000 videos were uploaded in 2023 to the 40 most popular websites for faked videos, receiving more than 4.2 billion views, The Washington Post reported.
But as the technology has improved, detecting deepfakes has grown increasingly difficult. Google’s policies prevent nonconsensual sexual images from appearing in search results, but deepfake porn can still show up on search engines. Some companies have created tools to try to detect whether a video has been AI-generated, but those are not perfect.
Researchers at the Massachusetts Institute of Technology found that humans and machines identify fake images at a similar rate, and that both make mistakes when doing so. Those researchers suggested paying close attention to the faces of people in images and videos suspected to be fake, including an individual’s glasses, facial hair and rate of blinking, all of which can look abnormal.
The images of Swift spread widely on X, which dismantled much of its content moderation shortly after Elon Musk took over the platform.
In a statement released overnight, X said it had removed the Swift images, though it did not identify her.
“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content,” the statement said. “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed.”
In response to The Post’s email requesting comment Friday morning, an automated message said: “Busy now, please check back later.”
Samantha Chery contributed to this report.