The three celebrities, each commanding a large following in their own realm, kicked off a week of high-profile reactions to AI-altered images that underscored the growing contention over artificial intelligence and its ability to reproduce people's likenesses without their consent.
Gayle King of CBS News posted a warning to Instagram on Monday, sharing a snippet of a video that used her likeness for a purported weight loss product.
“People keep sending me this video and asking about this product and I have NOTHING to do with this company,” King wrote, stamping the words “fake video” across her AI depiction. “I’ve never heard of this product or used it! Please don’t be fooled by these AI videos.”
Representatives of King “have requested that the fake video be taken down several times,” said Samantha Graham, a CBS News spokesperson. “Gayle was made aware of this by friends who reached out to her about it.”
Academy Award winner Tom Hanks warned his 9.5 million followers of a similar scam Sunday.
“There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” Hanks posted to Instagram.
It is unclear what entities were behind the deepfakes, or false images purporting to be real, that featured King's and Hanks's doctored footage. Hanks's representatives declined to respond to The Washington Post's questions. His post did not name the alleged dental company that depicted his likeness, and he didn't share the video. King's post showed a logo for "Artipet," a brand for which a web search turned up little online presence as of Tuesday evening.
The conversations, legal recourse and regulations around the technology remain murky, with few concrete laws in the United States or around the world targeting unauthorized AI-generated content.
Many companies are rethinking or reducing ethical AI research, often as part of broader cost-cutting, even as new applications of the technology are booming. Some schools have banned access to ChatGPT, an AI bot that can churn out responses or answers to students' schoolwork in mere moments. AI images such as the ones Hanks and King called out are becoming harder to distinguish from real ones as tech companies improve their AI products, giving rise to misinformation, which in these celebrities' cases took the form of false advertisements. The technology may also take jobs from already disadvantaged groups.
It’s a sticking point in Hollywood, too. The use of AI is among the issues on the bargaining table between the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and major Hollywood studios. The actors union remains on a months-long strike against studios, even after the Writers Guild of America recently ended its strike.
SAG-AFTRA seeks protections for its members from having their likeness, voice or performances used without their consent or without compensation. In an FAQ about the strike authorization, the union said AI’s ability to mimic these creative expressions is a “real and immediate threat to the work of our members.” The guild also wants to prevent studios from being able to train AI to create performances from an actor’s existing work.
Hanks has expressed worry about the technology, too. In an interview this year on "The Adam Buxton Podcast," the actor said AI allows fake versions of actors to proliferate, and that if the practice goes unchecked, the public may not know or care.
“Right now if I wanted to, I could get together and pitch a series of seven movies that would star me in them in which I would be 32 years old, from now until kingdom come. Anybody can now re-create themselves at any age they are by way of AI or deepfake technology,” Hanks told Buxton.
“I could be hit by a bus tomorrow, and that’s it, but performances can go on and on and on and on. And outside of the understanding that it’s been done with AI or deepfake, there’ll be nothing to tell you that it’s not me,” he said. “That’s certainly an artistic challenge, but it’s also a legal one.”
“We saw this coming. We saw that there was going to be this ability to take zeros and ones inside a computer and turn it into a face and a character. Now that has only grown a billionfold since then, and we see it everywhere,” Hanks added. “I can tell you that there [are] discussions going on in all of the guilds, all of the agencies, and all of the legal firms to come up with the legal ramifications of my face and my voice — and everybody else’s — being our intellectual property.”
Beyond Hollywood, social media stars are calling out the technology for its deceptive uses.
YouTube star MrBeast, whose real name is Jimmy Donaldson, shared an AI-generated video of himself Sunday. Like the ones of King and Hanks, the video showed a fake Donaldson making claims that the real Donaldson, who has 188 million YouTube subscribers, doesn't endorse.
He called it a “deepfake scam ad.”
“If you’re watching this video, you’re one of the 10,000 lucky people who will get an iPhone 15 Pro for just $2,” the video advertisement told viewers. “I’m MrBeast and I’m doing the world’s largest iPhone 15 giveaway. Click the link below to claim yours now.”
Donaldson criticized the TikTok-based video on X, one of several sites that have struggled over the years to contain such misinformation. “Are social media platforms ready to handle the rise of AI deepfakes?” he asked. “This is a serious problem.”
Gerrit De Vynck contributed to this report.