Alphabet CEO Sundar Pichai committed to an “AI Pact” and discussed disinformation around elections and the Russian war in Ukraine in meetings with top European Union officials on Wednesday.
In a meeting with Thierry Breton, the European Commissioner for Internal Market, Pichai said that Alphabet-owned Google would collaborate with other companies on self-regulation to ensure that AI products and services are developed responsibly.
“Agreed with Google CEO @SundarPichai to work together with all major European and non-European #AI actors to already develop an “AI Pact” on a voluntary basis ahead of the legal deadline of the AI regulation,” Breton said in a tweet Wednesday afternoon.
“We expect technology in Europe to respect all of our rules, on data protection, online safety, and artificial intelligence. In Europe, it’s not pick and choose. I am pleased that @SundarPichai recognises this, and that he is committed to complying with all EU rules.”
The development hints at how top technology bosses are seeking to assuage politicians and get ahead of looming regulations. The European Parliament earlier this month greenlit a groundbreaking package of rules for AI, including provisions to ensure the training data for tools such as ChatGPT doesn’t violate copyright laws.
The rules take a risk-based approach to regulating AI, placing the highest-risk applications of the technology, such as real-time facial recognition in public spaces, under a ban and imposing tough transparency requirements on applications that pose limited risk.
Regulators are growing increasingly concerned about the risks surrounding AI. Tech industry leaders, politicians and academics have raised alarm about how advanced new forms of AI, such as generative AI tools and the large language models that power them, have become.
These tools allow users to generate new content, such as a poem in the style of William Wordsworth or an essay, simply by giving them prompts. They have raised concerns not least over their potential to disrupt the labor market and to produce disinformation.
ChatGPT, the most popular generative AI tool, has amassed more than 100 million users since it was launched in November. Google released its own alternative to ChatGPT, called Bard, in March, and unveiled an advanced new language model known as PaLM 2 earlier this month.
During a separate meeting with Vera Jourova, a vice president of the European Commission, Pichai committed to ensuring that Google's AI products are developed with safety in mind.
Both Pichai and Jourova “agreed AI could have an impact on disinformation tools, and that everyone should be prepared for a new wave of AI generated threats,” according to a readout of the meeting that was shared with CNBC.
“Part of the efforts could go into marking or making transparent AI generated content. Mr Pichai stressed that Google’s AI models already include safeguards, and that the company continues investing in this space to ensure a safe rollout of the new products.”
Tackling Russian propaganda
Pichai’s meeting with Jourova focused on disinformation around Russia’s war on Ukraine and elections, according to a statement.
Jourova “shared her concern about the spread of pro-Kremlin war propaganda and disinformation, also on Google’s products and services,” according to a readout of the meeting. The EU official also discussed access to information in Russia.
Jourova asked Pichai to take “swift action” on the issues faced by Russian independent media that can’t monetize their content in Russia on YouTube. Pichai agreed to follow up on the issue, according to the readout.
Jourova also “highlighted risks of disinformation for electoral processes in the EU and its Member States.” The next elections for European Parliament will take place in 2024. There are also regional and national elections across the EU this year and next.
Jourova praised Google’s “engagement” with the bloc’s Code of Practice on Disinformation, a self-regulatory framework released in 2018 and since revised, which aims to spur online platforms to tackle false information. However, she said “more work is needed to improve reporting” under the framework.
Signatories of the code are required to report how they have implemented measures to tackle disinformation.