The Italian data authority cited a March data leak, which the company said allowed some users to see information about other users’ chat history and some users’ payment information. OpenAI announced last week that it had patched the bug. European regulators regularly investigate American tech companies for data breaches and privacy lapses, but it is rare for a Western government to take the step of banning an app.
The regulator cited broader concerns about OpenAI’s data-collection practices, which could affect a host of companies that build systems by vacuuming up massive volumes of data, often scraped from the internet. The Italian agency, which enforces both E.U. and domestic data protection rules, said in a news release that “there was no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
The ban signals the regulatory and compliance challenges ahead for OpenAI, as ChatGPT’s ability to hold remarkably humanlike conversations on complex topics, draft emails and generate articles has stunned the public. Its responses rely on a wide range of sources from across the internet — and are not always accurate or age-appropriate — sparking concerns about the technology’s ability to amplify misinformation, harm children and encroach on users’ privacy.
The action in Italy is a signal that European regulators may be more aggressive in attempting to regulate the future of artificial intelligence than their U.S. counterparts. The E.U. has stricter privacy regulations than the United States, which lacks a comprehensive federal consumer privacy law. The bloc is also expected to resume negotiations this year on new artificial intelligence regulation.
Margrethe Vestager, the executive vice president of the European Commission, said in a Thursday tweet that regulators need to “advance freedoms & protect our rights” as technology evolves.
The Italian regulator’s announcement highlighted the stakes for OpenAI: If the start-up does not respond to the agency within 20 days, it could face a fine of about $21 million or 4 percent of its annual revenue.
Earlier this year, the Italian authority imposed a similar ban on Replika, another AI-based app that generates a “virtual friend.” The agency said it carried out tests showing that the app served replies to children that were “absolutely inappropriate to their age.”
American regulators are increasingly under pressure to take action against chatbots as they grow in popularity. On Thursday, a think tank submitted a complaint to the Federal Trade Commission asking the agency to probe the privacy and public safety risks associated with ChatGPT. The organization, the Center for AI and Digital Policy, called for the agency to enjoin “further commercial releases of GPT-4,” the latest iteration of the chatbot technology.
The FTC has signaled an increasing focus on artificial intelligence. In an advisory last month, the agency asked companies to “keep your AI claims in check,” warning businesses not to exaggerate what such products can do and to evaluate risks before pushing products to market. FTC Chair Lina Khan (D) said at an antitrust conference this week that her agency is working to protect competition in the growing artificial intelligence market.
The release of ChatGPT set off a race among competitors to develop AI of similar sophistication: Microsoft last month made a new AI chatbot powered by the same technology available to journalists, some of whom reported bizarre and troubling interactions.
Italy’s action is likely to remain an outlier, as both U.S. and European governments have been historically slow to regulate new, emerging technologies, said Ron Moscona, a London-based partner at the law firm Dorsey & Whitney.
“Legislatures and regulators take many years to understand the issues arising from digital technologies and to develop the legislation to address them,” Moscona said in an emailed statement. “Unfortunately, it is unlikely that legislatures in the West will be able to come up with a comprehensive set of policies to deal with the challenges raised by chatbots any time soon and it is unlikely that the industry will stop the spread of those tools.”
Benjamin Soloway, Pranshu Verma and Rachel Lerman contributed to this report.