Instead, the document amounts to a manifesto stating that AI-generated content, much of which is created by the companies’ tools and posted on their platforms, does present risks to fair elections, and it outlines steps to try to mitigate that risk, like labeling suspected AI content and educating the public on the dangers of AI.
“The intentional and undisclosed generation and distribution of deceptive AI election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the agreement reads.
AI-generated images, videos and audio recordings, often called “deepfakes,” have been around for several years. But in the past year they have rapidly improved in quality, to the point where some fakes are difficult to distinguish from the real thing. The tools to make them are also now widely available, making their production much easier.
AI-generated content has already cropped up in election campaigns around the world. Last year, an ad in support of former Republican presidential candidate Ron DeSantis used AI to mimic the voice of former president Donald Trump. In Pakistan, former prime minister Imran Khan used AI to deliver campaign speeches while in jail. In January, a robocall purporting to be President Biden encouraged people not to vote in the New Hampshire primary. The calls used an AI-generated version of Biden’s voice.
Tech companies have been under pressure from regulators, AI researchers and political activists to rein in the spread of fake election content. The new agreement is similar to a voluntary pledge the same companies, plus several others, signed in July after a meeting at the White House, where they committed to trying to identify and label fake AI content on their sites. In the new accord, the companies also commit to educating users about deceptive AI content and being transparent about their efforts to identify deepfakes.
The tech companies also already have their own policies on political AI-generated content. TikTok doesn’t allow fake AI content of public figures when it is being used for political or commercial endorsements. Meta, the parent company of Facebook and Instagram, requires political advertisers to disclose whether they use AI in ads on its platforms. YouTube requires creators to label AI-generated content that looks realistic when they post it on the Google-owned video site.
Still, attempts to build a broad system in which AI content is identified and labeled across social media have yet to come to fruition. Google has shown off “watermarking” technology but doesn’t require its customers to use it. Adobe, the owner of Photoshop, has positioned itself as a leader in reining in AI content, but its own stock photo website was recently full of fake images of the war in Gaza.