The policy comes about a year after the public release of ChatGPT, an AI language model that wowed users with its ability to write memos, essays and even poetry following a user’s prompt. Meanwhile, AI-powered image-editing and generation tools are exploding in use on social media.
The 2024 presidential election could present an early test of how platforms will police the use of AI in political advertising, given that fake social media accounts and other forms of disinformation are already widespread. The new policy also comes after the Biden administration last week announced a sweeping executive order to regulate AI.
The Meta announcement cited specific uses of AI that advertisers will have to disclose. They include ads that show a real person saying or doing something they didn’t say or do; depict a realistic-looking person who doesn’t exist or a realistic-looking event that didn’t happen; or alter footage of a real event. Disclosure is also required for ads that depict a “realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”
However, advertisers don’t need to disclose when ads are changed in ways that are “inconsequential” to the issue raised in the ad, such as cropping or color-correcting, Meta said.
The social media giant also promised to crack down on advertisers that fail to comply. “If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser,” the company said in its unsigned blog post.