
Meta Calls for Industry Effort to Label A.I.-Generated Content

Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.

On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material that would signal that the content was generated using artificial intelligence.

The standards could allow social media companies to quickly identify content generated with A.I. that has been posted to their platforms and allow them to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools that allow people to quickly and easily create artificial posts.

“While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview.

He added that he hoped this effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content was artificial so that it would be simpler for all of them to recognize it.

As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden that urged people not to vote in a recent primary.

Meta, which owns Facebook, Instagram, WhatsApp and Messenger, is in a unique position because it is developing technology to spur wide consumer adoption of A.I. tools while being the world’s largest social network capable of distributing A.I.-generated content. Mr. Clegg said Meta’s position gave it particular insight into both the generation and distribution sides of the issue.

Meta is homing in on a series of technological specifications called the IPTC and C2PA standards. These standards embed information in a piece of digital media’s metadata that specifies whether the media is authentic. Metadata is the underlying information embedded in digital content that gives a technical description of that content. Both standards are already widely used by news organizations and photographers to describe photos or videos.
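As a rough illustration of the idea, the Python sketch below scans a JPEG for the JUMBF label "c2pa" that the C2PA specification uses to mark an embedded provenance manifest. It is a minimal sketch, not a conformant parser, and the file name is a hypothetical stand-in; a real implementation would parse and cryptographically verify the manifest.

# Illustrative sketch only, not a conformant C2PA parser. C2PA manifests
# are carried in JUMBF boxes inside JPEG APP11 (0xFFEB) segments, and the
# manifest superbox is labeled with the ASCII string "c2pa"; this scan
# just looks for that label inside APP11 segments.

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":          # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                        # reached entropy-coded image data
        marker = data[i + 1]
        if marker == 0xFF:               # padding byte, skip it
            i += 1
            continue
        if marker == 0xD9:               # EOI: end of image
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD8:
            i += 2                       # markers that carry no length field
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB and b"c2pa" in data[i + 4:i + 2 + length]:
            return True                  # APP11 segment carrying a manifest
        i += 2 + length
    return False


if __name__ == "__main__":
    # "upload.jpg" is a hypothetical file name for the example.
    print(has_c2pa_manifest("upload.jpg"))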

Adobe, which makes the Photoshop editing software, and a host of other tech and media companies have spent years lobbying their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative is a partnership among dozens of companies — including The New York Times — to combat misinformation and “add a layer of tamper-evident provenance to all types of digital content, starting with photos, video and documents,” according to the initiative.

Companies that offer A.I. generation tools could add the standards into the metadata of the videos, photos or audio files they helped to create. That would signal to social networks like Facebook, Twitter and YouTube that such content was artificial when it was being uploaded to their platforms. Those companies, in turn, could add labels that noted these posts were A.I.-generated to inform users who viewed them across the social networks.
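To make that flow concrete, here is a hedged sketch of what an upload-time check might look like, reusing has_c2pa_manifest from the sketch above. The IPTC URI is the real "trainedAlgorithmicMedia" digital source type code used to mark synthetic media; the function name and label text are illustrative assumptions, not any platform's actual implementation.

# Hypothetical upload-pipeline sketch. The IPTC digital source type URI
# below is the real NewsCodes value for media created by a trained
# algorithm; the label text and function name are illustrative.
IPTC_AI_SOURCE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def label_for_upload(path: str) -> str | None:
    """Return a user-facing label if the file declares A.I. provenance."""
    with open(path, "rb") as f:
        data = f.read()
    # Crude byte scan of embedded metadata; a production system would
    # parse the XMP/C2PA structures and verify signatures instead.
    if IPTC_AI_SOURCE in data or has_c2pa_manifest(path):
        return "Made with AI"            # assumed label text
    return None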

Meta and others also require users who post A.I.-generated content to disclose that they have done so when uploading it to the companies’ apps. Failing to do so results in penalties, though the companies have not detailed what those penalties may be.

Mr. Clegg also said that if the company determined that a digitally created or altered post “creates a particularly high risk of materially deceiving the public on a matter of importance,” Meta could add a more prominent label to the post to give the public more information and context concerning its provenance.

A.I. technology is advancing rapidly, which has spurred researchers to race to develop tools that can spot fake content online. Though companies like Meta, TikTok and OpenAI have developed ways to detect such content, technologists have quickly found ways to circumvent those tools. Artificially generated video and audio have proved even more challenging to spot than A.I. photos.

(The New York Times Company is suing OpenAI and Microsoft for copyright infringement over the use of Times articles to train artificial intelligence systems.)

“Bad actors are always going to try and circumvent any standards we create,” Mr. Clegg said. He described the technology as both a “sword and a shield” for the industry.

Part of that difficulty stems from the fragmented way tech companies are approaching the problem. Last fall, TikTok announced a new policy that would require its users to add labels to video or photos they uploaded that were created using A.I. YouTube announced a similar initiative in November.

Meta’s new proposal would try to tie some of those efforts together. Other industry efforts, like the Partnership on A.I., have brought together dozens of companies to discuss similar solutions.

Mr. Clegg said he hoped that more companies would agree to participate in the standards, especially going into the presidential election.

“We felt particularly strong that during this election year, waiting for all the pieces of the jigsaw puzzle to fall into place before acting wouldn’t be justified,” he said.
