Meta said it is working with other companies, including Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, to develop common standards for identifying AI-generated content, which will help the platform label images created with those companies' tools.
Such images will be detectable because certain AI tools embed metadata in the files they generate.
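The article does not name the specific standard involved, but one existing industry convention is the IPTC photo-metadata field `DigitalSourceType`, whose value `trainedAlgorithmicMedia` marks AI-generated media. As a purely illustrative sketch (not Meta's actual detection pipeline), a naive check might scan an image file's bytes for that marker:

```python
# Illustrative sketch only: scan a file for the IPTC DigitalSourceType
# value that some AI tools embed in XMP metadata. Real detectors parse
# the metadata structure properly; this byte scan is a simplification.
AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC value for AI-generated media

def has_ai_metadata(path: str) -> bool:
    """Return True if the raw file bytes contain the AI-source marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()
```

A production system would parse the XMP/C2PA metadata blocks rather than scanning raw bytes, since the marker could appear coincidentally in image data.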
“We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app. We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” Meta’s president of global affairs Nick Clegg said in the blog post.
Those labels will be applied across Meta’s platforms, which also include Threads, the text-based platform that launched in July.
Clegg said comparable labeling efforts for AI tools that generate audio and video have not yet begun at the same scale.
As the industry “works toward this capability,” Clegg said Meta will add a feature for people to disclose when they share AI-generated video or audio so the company can add a label to it.
For AI-generated or altered images, video or audio that “creates a particularly high risk of materially deceiving the public on a matter of importance,” Meta may add a “more prominent label if appropriate” to provide users with additional information and context, he said.
Read more in a report at TheHill.com.