Meta has announced a new initiative to identify and label photorealistic content created with artificial intelligence (AI) across its platforms. The company says that in the coming months it will start applying labels to AI-generated images posted on Facebook, Instagram, and Threads.
The labels will rely on technical standards developed with industry partners through forums such as the Partnership on AI. These standards allow AI-generated images to be identified through embedded metadata and invisible watermarks. Meta says its systems can detect these markers at scale and flag content produced by major AI image generators from companies such as Google, OpenAI, Microsoft, and Adobe.
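As a rough illustration of how metadata-based detection can work: the IPTC photo-metadata standard defines a `DigitalSourceType` value, `trainedAlgorithmicMedia`, that AI image tools can embed in a file's XMP packet to mark it as machine-generated. The sketch below is a deliberately naive check, assuming the marker appears verbatim in the file's bytes; real detection pipelines parse the XMP packet and any C2PA provenance manifest properly rather than scanning raw bytes, and Meta has not published its implementation.

```python
# Naive sketch: look for the IPTC "trainedAlgorithmicMedia"
# DigitalSourceType marker inside a file's embedded XMP metadata.
# A plain byte scan is illustrative only; production systems parse
# XMP and C2PA manifests with proper libraries.

AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file contains the IPTC AI-generation marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    import os
    import tempfile

    # Demo with a synthetic file carrying the marker (no real image needed).
    with tempfile.NamedTemporaryFile(delete=False, suffix=".jpg") as tmp:
        tmp.write(b"\xff\xd8...<xmp>" + AI_MARKER + b"</xmp>...")
        demo = tmp.name
    print(looks_ai_generated(demo))  # True for this synthetic file
    os.unlink(demo)
```

A check like this only catches cooperative generators that embed the marker, which is why the announcement also covers invisible watermarks and manual disclosure for content that carries no metadata at all.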
For photorealistic video or realistic-sounding audio that lacks these markers, Meta will require users to disclose that their content was AI-generated; failure to do so could result in penalties. The company is also developing automated classifiers to detect AI-generated content that carries no identifiers at all.
Meta says generative AI creates new opportunities but also risks, such as harmful deepfakes. It stresses that its Community Standards apply equally to AI-generated content, and it hopes to use AI to enforce those policies more effectively: the company is testing large language models trained on the standards themselves.
Meta already labels photorealistic images created with its own Meta AI feature as "Imagined with AI," and says the new labels will extend that transparency to images made with other companies' tools, helping users distinguish human-made content from synthetic media.