“Generative” AI tools like Google’s Bard chatbot or OpenAI’s DALL-E image generator have rapidly improved in quality, to the point where they can pass professional exams and conjure realistic-looking pictures that are often hard to distinguish from ones taken by a camera.
The company said in the announcement that it was making the change because of “the growing prevalence of tools that produce synthetic content.”
That’s prompted concerns from politicians and democracy activists that the tools could be used to trick voters or make it look like a political opponent said or did something they didn’t. Google and Meta, which together control a huge share of online advertising, have been under pressure for years to push back against false claims made on their platforms. Meta also bans deepfakes outright.
Fake images and audio have already begun showing up in election ads around the world. In June, Florida Gov. Ron DeSantis’s campaign released a video that included fake images of Donald Trump hugging former White House coronavirus adviser Anthony S. Fauci. Last month, a Polish opposition party admitted that it used AI-generated audio to fake the voice of the country’s prime minister in an ad.
Google’s new rules apply only to advertisements and won’t affect regular videos uploaded to its YouTube platform. They will go into effect in November.