The Federal Communications Commission on Thursday outlawed unwanted robocalls generated by artificial intelligence, amid growing concerns over election disinformation and consumer fraud facilitated by the technology.
The unanimous decision by the F.C.C. cited a three-decade-old law aimed at curbing junk phone calls, clarifying that A.I.-generated spam calls are also illegal. By doing so, the agency said it expanded the ability of states to prosecute creators of unsolicited spam robocalls.
“It seems like something from the far-off future, but it is already here,” the F.C.C. chairwoman, Jessica Rosenworcel, said in a statement. “Bad actors are using A.I.-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters.”
Concerns about the use of A.I. to replicate the voices and images of politicians and celebrities have grown in recent months as the technology to recreate personas has taken off — particularly ahead of the U.S. presidential election in November.
Those concerns came to a head late last month, when thousands of voters received an unsolicited robocall from a faked voice of President Biden, instructing them to abstain from voting in the first primary of the election season. The state attorney general's office announced this week that it had opened a criminal investigation into a Texas-based company it believes is behind the robocall. The caller ID was falsified to make it seem as if the calls were coming from the former New Hampshire chairwoman of the Democratic Party.
A.I. has also been used to create deep-fake videos and ads mimicking the voices and images of public figures. That includes fake and unapproved videos of the actor Tom Hanks promoting dental plans and one with sexually explicit content of the singer Taylor Swift.
Lawmakers have called for legislation to ban A.I. deep fakes in political ads, but no bills have gained traction in Congress. In the absence of federal legislation, more than a dozen states have passed laws curbing A.I. use in political ads.