A multistate task force is also preparing for potential civil litigation against Life Corp., the company linked to the calls, and the Federal Communications Commission ordered Lingo Telecom to stop permitting illegal robocall traffic after an industry consortium identified the Texas-based carrier as an originating source of the calls.
Formella said the actions were intended to serve notice that New Hampshire and other states will take action against anyone who uses AI to interfere in elections.
“Don’t try it,” he said. “If you do, we will work together to investigate, we will work together with partners across the country to find you, and we will take any enforcement action available to us under the law. The consequences for your actions will be severe.”
New Hampshire is issuing subpoenas to Life Corp. and Lingo Telecom, as well as to other individuals and entities that may have been involved in the calls, Formella said.
Life Corp., its owner Walter Monk and Lingo Telecom did not immediately respond to requests for comment.
The announcement foreshadows a new challenge for state regulators, as increasingly advanced AI tools create new opportunities to meddle in elections across the world with fake audio recordings, photos and even videos of candidates, muddying the waters of reality.
The robocalls were an early test of a patchwork of state and federal enforcers, who are largely relying on election and consumer protection laws enacted before generative AI tools were widely available to the public.
The criminal investigation was announced more than two weeks after reports of the calls surfaced, underscoring the challenge for state and federal enforcers to move quickly in response to potential election interference.
“When the stakes are this high, we don’t have hours and weeks,” said Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation. “The reality is, the damage will have been done.”
In late January, between 5,000 and 20,000 people received AI-generated phone calls impersonating Biden that urged them not to vote in the state’s primary, telling them: “It’s important that you save your vote for the November election.” It remains unclear how many people may have decided not to vote because of the calls, Formella said.
A day after the calls surfaced, Formella’s office announced it would investigate the matter. “These messages appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters,” he said in a statement. “New Hampshire voters should disregard the content of this message entirely.”
Despite the announcement, Formella did not say which company’s software was used to create the AI-generated robocall of Biden.
Farid said the recording was probably created with software from the AI voice-cloning company ElevenLabs, according to an analysis he conducted with researchers at the University of Florida.
ElevenLabs, which was recently valued at $1.1 billion and raised $80 million in a funding round co-led by venture capital firm Andreessen Horowitz, allows anyone to sign up for a paid tool that lets them clone a voice from a preexisting voice sample.
ElevenLabs has been criticized by AI experts for not having enough guardrails in place to ensure it isn’t weaponized by scammers looking to swindle voters, elderly people and others.
The company suspended the account that created the Biden robocall deepfake, according to news reports.
“We are dedicated to preventing the misuse of audio AI tools and take any incidents of misuse extremely seriously,” ElevenLabs CEO Mati Staniszewski said. “Whilst we cannot comment on specific incidents, we will take appropriate action when cases are reported or detected and have mechanisms in place to assist authorities or relevant parties in taking steps to address them.”
The robocall incident is also one of several episodes that underscore the need for better policies within technology companies to ensure their AI services are not used to distort elections, AI experts said.
In late January, ChatGPT creator OpenAI banned a developer from using its tools after the developer built a bot mimicking long-shot Democratic presidential candidate Dean Phillips. Phillips’s campaign had supported the bot, but after The Washington Post reported on it, OpenAI determined that the bot broke its rules against the use of its technology in campaigns.
Experts said technology companies have tools to regulate AI-generated content, such as watermarking audio to create a digital fingerprint or building guardrails that prevent people from cloning voices to say certain things. Companies also can join a coalition meant to prevent the spread of misleading information online by developing technical standards that establish the origins of media content.
But Farid said it’s unlikely many tech companies will implement safeguards anytime soon, regardless of the threats their tools pose to democracy.
“We have 20 years of history to explain to us that tech companies don’t want guardrails on their technologies,” he said. “It’s bad for business.”