Last week, the Pew Research Center released a survey in which a majority of Americans — 52 percent — said they feel more concerned than excited about the increased use of artificial intelligence, including worries about personal privacy and human control over the new technologies.
The proliferation this year of generative AI tools such as ChatGPT, Bard and Bing, all of which are available to the public, has brought artificial intelligence to the forefront. Now, governments from China to Brazil to Israel are trying to figure out how to harness AI’s transformative power while reining in its worst excesses and drafting rules for its use in everyday life.
Some countries, including Israel and Japan, have responded to its lightning-fast growth by clarifying existing data, privacy and copyright protections — in both cases clearing the way for copyrighted content to be used to train AI. Others, such as the United Arab Emirates, have issued vague and sweeping proclamations around AI strategy, launched working groups on AI best practices, or published draft legislation for public review and deliberation.
Others still have taken a wait-and-see approach, even as industry leaders, including OpenAI, the creator of viral chatbot ChatGPT, have urged international cooperation around regulation and inspection. In a statement in May, the company’s CEO and two of its co-founders warned against the “possibility of existential risk” associated with superintelligence, a hypothetical AI whose intellect would exceed human cognitive performance.
“Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” the statement said.
Still, there are few concrete laws around the world that specifically target AI regulation. Here are some of the ways in which lawmakers in various countries are attempting to address the questions surrounding its use.
Brazil has a draft AI law that is the culmination of three years of proposed (and stalled) bills on the subject. The document — which was released late last year as part of a 900-page Senate committee report on AI — meticulously outlines the rights of users interacting with AI systems and provides guidelines for categorizing different types of AI based on the risk they pose to society.
The law’s focus on users’ rights puts the onus on AI providers to supply users with information about their AI products. Users have a right to know when they’re interacting with an AI, as well as a right to an explanation of how an AI made a certain decision or recommendation. Users can also contest AI decisions or demand human intervention, particularly when the decision is likely to have a significant impact on the user, as with systems involved in self-driving cars, hiring, credit evaluation or biometric identification.
AI developers are also required to conduct risk assessments before bringing an AI product to market. The highest risk classification refers to any AI systems that deploy “subliminal” techniques or exploit users in ways that are harmful to their health or safety; these are prohibited outright. The draft AI law also outlines possible “high-risk” AI implementations, including AI used in health care, biometric identification and credit scoring, among other applications. Risk assessments for “high-risk” AI products are to be published in a government database.
All AI developers are liable for damage caused by their AI systems, though developers of high-risk products are held to an even higher standard of liability.
China has published a draft regulation for generative AI and is seeking public input on the new rules. Unlike most other countries, though, China’s draft notes that generative AI must reflect “Socialist Core Values.”
In its current iteration, the draft regulation says developers “bear responsibility” for the output created by their AI, according to a translation of the document by Stanford University’s DigiChina Project. There are also restrictions on sourcing training data; developers are legally liable if their training data infringes on someone else’s intellectual property. The regulation also stipulates that AI services must be designed to generate only “true and accurate” content.
These proposed rules build on existing legislation relating to deepfakes, recommendation algorithms and data security, giving China a leg up over other countries drafting new laws from scratch. The country’s internet regulator also announced restrictions on facial recognition technology in August.
China has set dramatic goals for its tech and AI industries: In the “Next Generation Artificial Intelligence Development Plan,” an ambitious 2017 document published by the Chinese government, the authors write that by 2030, “China’s AI theories, technologies, and applications should achieve world-leading levels.”
In June, the European Parliament voted to approve what it has called “the AI Act.” Similar to Brazil’s draft legislation, the AI Act sorts AI into three risk categories: unacceptable, high and limited.
AI systems deemed unacceptable are those considered a “threat” to society. (The European Parliament offers “voice-activated toys that encourage dangerous behaviour in children” as one example.) Such systems are banned under the AI Act. High-risk AI must be approved by European officials before going to market, and remains subject to oversight throughout the product’s life cycle. This category includes AI products that relate to law enforcement, border management and employment screening, among others.
AI systems deemed limited-risk must be appropriately labeled so that users can make informed decisions about their interactions with the AI. Otherwise, these products mostly avoid regulatory scrutiny.
The Act still needs to be approved by the Council of the European Union, though parliamentary lawmakers hope that process will conclude later this year.
In 2022, Israel’s Ministry of Innovation, Science and Technology published a draft policy on AI regulation. The document’s authors describe it as a “moral and business-oriented compass for any company, organization or government body involved in the field of artificial intelligence,” and emphasize its focus on “responsible innovation.”
Israel’s draft policy says the development and use of AI should respect “the rule of law, fundamental rights and public interests and, in particular, [maintain] human dignity and privacy.” Elsewhere, vaguely, it states that “reasonable measures must be taken in accordance with accepted professional concepts” to ensure AI products are safe to use.
More broadly, the draft policy encourages self-regulation and a “soft” approach to government intervention in AI development. Instead of proposing uniform, industry-wide legislation, the document encourages sector-specific regulators to consider highly tailored interventions when appropriate, and urges the government to seek compatibility with global AI best practices.
In March, Italy briefly banned ChatGPT, citing concerns about how — and how much — user data was being collected by the chatbot.
Since then, Italy has allocated approximately $33 million to support workers at risk of being left behind by digital transformation — including but not limited to AI. About one-third of that sum will be used to train workers whose jobs may become obsolete due to automation. The remaining funds will be directed toward teaching unemployed or economically inactive people digital skills, in hopes of spurring their entry into the job market.
Japan, like Israel, has adopted a “soft law” approach to AI regulation: the country has no prescriptive regulations governing specific ways AI can and can’t be used. Instead, Japan has opted to wait and see how AI develops, citing a desire to avoid stifling innovation.
For now, AI developers in Japan have had to rely on adjacent laws — such as those relating to data protection — to serve as guidelines. For example, in 2018, Japanese lawmakers revised the country’s Copyright Act, allowing for copyrighted content to be used for data analysis. Since then, lawmakers have clarified that the revision also applies to AI training data, clearing a path for AI companies to train their algorithms on other companies’ intellectual property. (Israel has taken the same approach.)
Regulation isn’t at the forefront of every country’s approach to AI.
In the United Arab Emirates’ National Strategy for Artificial Intelligence, for example, the country’s regulatory ambitions are granted just a few paragraphs. In sum, an Artificial Intelligence and Blockchain Council will “review national approaches to issues such as data management, ethics and cybersecurity,” and observe and integrate global best practices on AI.
The rest of the 46-page document is devoted to encouraging AI development in the UAE by attracting AI talent and integrating the technology into key sectors such as energy, tourism and health care. This strategy, the document’s executive summary boasts, aligns with the UAE’s efforts to become “the best country in the world by 2071.”