The expansion follows criticism from Meta’s Oversight Board, which called the company’s existing policies “incoherent” in February after an altered video of President Biden remained on Facebook because it didn’t violate Meta’s rules. The Oversight Board, an outside group funded by Meta, called on the company to extend the policy to cover altered audio as well as videos that falsely depict people doing things they did not actually do.
“We agree with the Oversight Board’s argument that our existing approach is too narrow” because it only applies to fake speech and not altered actions, Meta Vice President of Content Policy Monika Bickert said in a statement.
“Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos.”
Following the Oversight Board’s recommendation, Meta also agreed to no longer remove digitally created media that doesn’t violate any other rules; instead, the company will attach a label saying the content has been altered. Beginning next month, it will apply “Made with AI” labels to content it determines was generated by AI or that users disclose as AI-generated when uploading.
In February, Meta unveiled plans to develop a system to identify AI-generated content created with services from other tech companies that have agreed to embed an AI identifier or a watermark.
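To illustrate how such embedded identifiers can work, here is a minimal sketch of a metadata-based first-pass check. It assumes participating tools write the published IPTC “digital source type” provenance value (“trainedAlgorithmicMedia”) into a file’s embedded metadata; the marker value is the real IPTC term, but the byte-scan approach and file names are illustrative, not Meta’s actual detection pipeline, and this does not cover invisible watermarks.

```python
from pathlib import Path

# IPTC DigitalSourceType value that participating generative-AI tools
# may embed in a file's metadata to signal AI provenance.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def has_ai_provenance_tag(path: str) -> bool:
    """Return True if the file's raw bytes contain the AI-source marker.

    Metadata such as XMP is stored as plain text inside most common
    image formats, so a simple byte search suffices for a rough,
    first-pass check. A production system would parse the metadata
    properly and also look for invisible watermarks.
    """
    data = Path(path).read_bytes()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    # Hypothetical file names for demonstration only.
    for name in ["photo.jpg", "render.png"]:
        try:
            flagged = has_ai_provenance_tag(name)
            print(f"{name}: {'Made with AI' if flagged else 'no AI tag found'}")
        except FileNotFoundError:
            print(f"{name}: file not found")
```

The weakness of any metadata-based scheme, as the experts cited below note, is that it only catches content from tools that cooperate: metadata can be stripped, and services that never embed an identifier are invisible to it.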
Meta’s AI policy expansion will probably be a welcome development for civil society groups and experts who have warned that AI-generated misinformation is already proliferating online during a pivotal election year. But experts have also cautioned that the labeling strategy may fall short of catching all misleading AI-generated content. While some companies, including Meta, have agreed to add watermarks to AI-generated content, other services have not.