Meta Rolling Out Label for AI-Generated Images
Meta has announced it will be rolling out changes to help Instagram, Facebook, and Threads users easily identify when images are AI-generated.
Over the next few months, Meta will add a label to let users know when images created by third-party platforms are “Imagined with AI.” The practice is already in place for images created with Meta AI. Meta aims to leverage markers embedded in the images to identify AI-generated content, and it will roll out these changes in all languages available on each of the three apps.
Meta says it is working with other companies to create standards that will allow platforms to identify whether images are AI-generated. There are several ways a company can flag an image: visible identifiers, invisible watermarks, or metadata embedded in the file.
Meta already uses both invisible techniques, watermarks and embedded metadata, to allow others to identify Meta AI images. Other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, are implementing similar standards.
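To make the metadata approach concrete, here is a minimal sketch of embedding and detecting a provenance marker in a PNG file. The marker key and value are hypothetical, modeled loosely on the IPTC “DigitalSourceType” property and its “trainedAlgorithmicMedia” term; the announcement does not specify the exact scheme Meta and its partners use.

```python
import struct
import zlib

# Hypothetical marker, modeled on the IPTC "DigitalSourceType" property;
# the actual keys used by Meta and partners are not public in the announcement.
MARKER_KEY = b"DigitalSourceType"
MARKER_VALUE = b"trainedAlgorithmicMedia"


def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))


def add_marker(png: bytes) -> bytes:
    """Insert a tEXt chunk carrying the marker immediately after IHDR."""
    sig, rest = png[:8], png[8:]
    ihdr_len = struct.unpack(">I", rest[:4])[0]
    ihdr_end = 4 + 4 + ihdr_len + 4  # length + type + data + CRC
    text = _chunk(b"tEXt", MARKER_KEY + b"\x00" + MARKER_VALUE)
    return sig + rest[:ihdr_end] + text + rest[ihdr_end:]


def has_marker(png: bytes) -> bool:
    """Walk the chunk list looking for the marker tEXt chunk."""
    pos = 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        payload = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and payload == MARKER_KEY + b"\x00" + MARKER_VALUE:
            return True
        pos += 12 + length  # advance past length, type, data, CRC
    return False


def minimal_png() -> bytes:
    """Build a valid 1x1 grayscale PNG for demonstration."""
    ihdr = _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    idat = _chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + pixel
    iend = _chunk(b"IEND", b"")
    return b"\x89PNG\r\n\x1a\n" + ihdr + idat + iend
```

Note that a metadata chunk like this is trivial to strip, which is exactly why the article later points out that metadata alone is not enough and why invisible watermarks woven into the pixels themselves are used alongside it.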
Meta is following the best practices outlined by Partnership on AI (PAI), an independent, nonprofit organization that aims to build “a future where Artificial Intelligence empowers humanity by contributing to a more just, equitable, and prosperous world.”
While this is a great step forward, it is only a first step. Audio and video content will require similar standards. Generative AI images whose creators do not use industry-standard markers, or who intentionally strip the markers out, will also require more robust methods of detection and labeling.
While the technology continues to evolve, Meta is also working on policy. It already requires advertisers to disclose when they use digitally created or altered content, and it is taking the policy further by asking all users to disclose and label their AI-generated content, whether an image, video, or audio recording.
Without providing specific details, Meta says it will also “add a more prominent label” when content poses a “particularly high risk of materially deceiving the public on a matter of importance,” and it may apply penalties to users who fail to disclose content accurately.
Nick Clegg, President, Global Affairs at Meta, says Meta is implementing these protocols ahead of upcoming elections and will continue to “learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”