Meta, formerly known as Facebook, has unveiled changes to its policies on AI-generated content and manipulated media following criticism from its Oversight Board. Beginning next month, the company will label such content more broadly, including a “Made with AI” badge for deepfakes, and will attach additional contextual information to content that has been manipulated in ways that pose a serious risk of misleading the public on important matters.
This shift is expected to see the social media giant label more potentially misleading content, a crucial step in a year marked by numerous elections globally. For deepfakes, however, Meta will apply labels only to content that carries “industry standard AI image indicators” or whose uploader has disclosed that it is AI-generated.
Under the new policy, content falling outside those parameters may remain unlabeled. Meta says it aims to prioritize transparency and added context over takedowns, citing the risk that removing manipulated media poses to free speech.
Consequently, in July Meta plans to stop removing content solely on the basis of its existing manipulated-video policy. The timeline is intended to give users time to become familiar with the self-disclosure process before the company stops taking down this narrower subset of manipulated media.
This policy adjustment may be a response to increasing legal pressures on Meta regarding content moderation and systemic risks, including regulations like the European Union’s Digital Services Act. The EU’s rules impose obligations on major social networks to balance content moderation with protecting free speech, especially in the context of elections.
Meta’s Oversight Board, which provides policy recommendations to the company, criticized its previous approach to AI-generated content. In response, Meta agreed to amend its policies, acknowledging the need for a broader scope beyond videos altered by AI.
The decision follows the Board’s review of a doctored video of President Biden, which prompted calls for a reevaluation of Meta’s policies. While Meta left that particular video up, it drew criticism for a definition of manipulated media so narrow that it covered only AI-generated videos.
To address these concerns, Meta is expanding its labeling efforts, particularly for synthetic media. The company will leverage industry-shared signals of AI-generated content and user self-disclosures to apply “Made with AI” labels. Additionally, Meta may add more prominent labels to content posing a high risk of materially deceiving the public.
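Meta has not published the exact signals it checks, but one widely adopted marker is the IPTC DigitalSourceType value “trainedAlgorithmicMedia”, which generative tools can embed in an image’s XMP metadata. As a rough, hypothetical sketch of what detecting such a signal could look like (not Meta’s actual pipeline), the following Python snippet scans a file for that marker:

```python
import sys

# IPTC's DigitalSourceType vocabulary defines a URI for media produced by a
# generative model; standards-compliant tools embed it in the image's XMP
# metadata packet. This byte scan is illustrative only, since Meta has not
# published its actual detection pipeline.
TRAINED_ALGORITHMIC_MEDIA = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def has_ai_source_marker(path: str) -> bool:
    """Return True if the file contains the IPTC 'trainedAlgorithmicMedia'
    digital-source-type marker in its embedded metadata."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP is stored as a plain XML packet inside the file, so a raw byte
    # search suffices for a rough check; a real system would parse the
    # packet properly and also look for C2PA manifests and watermarks.
    return TRAINED_ALGORITHMIC_MEDIA in data

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "AI marker found" if has_ai_source_marker(path) else "no marker"
        print(f"{path}: {verdict}")
```

Because such metadata is trivial to strip, signal-based detection is inherently best-effort, which is one reason uploader self-disclosure remains part of Meta’s approach.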
Even with the new labels, Meta will still remove manipulated content when it violates other Community Standards policies. The company will continue working with independent fact-checkers to identify risky manipulated content, reducing its reach and adding informational labels where necessary.
As synthetic content continues to proliferate, Meta’s revised approach aims to provide users with better information and context to evaluate the content they encounter on its platforms.