Meta has rolled out new rules to address the rising prevalence of AI-generated photos, videos, and audio across its platforms.
The updated Meta AI Content policy follows feedback from the Oversight Board and international experts, which urged a shift from focusing narrowly on deepfakes to a broader definition of manipulated media.
Now, “AI info” labels will be added not just to deepfake videos but also to a much wider range of digitally created or altered content, including anything flagged through industry-standard AI markers or user self-disclosure.
These changes are intended to provide more context for users, rather than simply removing content, and will be supplemented by more prominent warnings on high-risk posts that could mislead the public.
Meta AI Content: Less Content Removal, More Transparency and Context

Going forward, Meta will no longer remove manipulated AI content solely under its older manipulated-video policy, focusing instead on context and transparency.
Content will only be removed if it violates other core Community Standards, such as those prohibiting voter interference, harassment, or incitement to violence.
Roughly 100 independent fact-checkers will continue to review misleading AI content, including text, and posts they rate as ‘false’ will be demoted in feeds.
The approach is supported by global surveys showing that most users prefer clear labeling over takedowns, aligning moderation with creative freedom and the rising use of generative AI.