The European Parliament voted on Wednesday to move forward with a draft law to govern how artificial intelligence is used in the European Union. It’s one of the first pieces of sweeping legislation focused on establishing guardrails to oversee the technology.
Called the AI Act, the draft legislation aims to protect people’s privacy, voting rights and copyrighted material. The law includes bans on using AI for discrimination and on invasive practices such as biometric identification in public spaces and “predictive policing systems” that could be used to illegally profile citizens.
Lawmakers also established a risk-based categorization system that classifies AI systems as “minimal,” “limited,” “high” or “unacceptable” risk. High-risk systems include technology that affects voters during election campaigns, human health and security, or the environment. Additionally, tech companies will be required to abide by transparency rules, such as disclosing when AI is used and implementing measures to prevent the creation of illegal content.
The law, once finalized, could affect how companies like Google, Meta, Microsoft and OpenAI develop new AI tools and products. Though artificial intelligence technologies have been around for years, the field has advanced rapidly and has begun to seep into everyday life.
OpenAI’s chatbot ChatGPT went viral after its November 2022 release and amassed 100 million active users by January. The generative AI tool can respond to questions, draft poetry and dish out advice on anything from fitness regimens to event planning. This spurred other companies to follow suit, ushering in a flood of new generative AI tools and products.
Microsoft launched Bing Chat in February using OpenAI’s GPT-4 technology, while Google rolled out its Bard chatbot and introduced an experimental AI-powered search engine called Search Generative Experience. In April, Amazon announced Bedrock, a service for building generative AI apps on Amazon Web Services.
Increased calls for AI regulations
EU member countries will begin negotiations on the AI Act, and a finalized law is expected early next year. The law could influence how policymakers in the US and other countries create their own regulatory systems.
“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Italian lawmaker Brando Benifei, who helped lead work on the AI Act, said on Wednesday.
Last month, OpenAI CEO Sam Altman testified during a Senate hearing on artificial intelligence and agreed that some form of government regulation is needed to mitigate the risks of AI, a sentiment echoed by many other technology and AI experts.
Senate leaders have reportedly said bipartisan efforts to craft a comprehensive AI framework are still months away, though some lawmakers have begun to tackle specific aspects of the technology. On Wednesday, Sens. Josh Hawley and Richard Blumenthal introduced a bill stipulating that Section 230, a law that shields internet companies from liability for content posted by users, doesn’t protect AI-generated content, according to Axios. The Senate Human Rights Subcommittee also held a hearing this week on the impact AI could have on human rights.