Meta’s Oversight Board slams company policies for manipulated media

The ruling agreed with Meta’s decision to leave up videos that were misleadingly edited to label Biden a “sick pedophile,” but said the platform must update its policies to capture audio and non-AI-generated media.

Meta’s quasi-independent Oversight Board sharply rebuked the company on Monday for applying its rules on manipulated media only to AI-generated material, arguing that the policy is “incoherent” and poses a serious risk ahead of a string of elections around the world in 2024.

The board described Meta’s policy as “lacking in persuasive justification” and “incoherent and confusing to users” as part of a ruling to leave up a manipulated video that claimed President Joe Biden was a pedophile.  

The board concluded that the company had been correct in allowing that video to remain on Meta’s platforms but said its broader rules around manipulated media fail to treat so-called “cheap fakes” — which alter videos without the use of AI technology — with the same level of seriousness as AI-manipulated content.

The board urged the platform to cast a wider net ahead of historic global elections this year that experts warn could be influenced by manipulated media of all sorts.


The board’s decision focuses on a number of videos that appeared on the platform last year, based on actual footage of Biden voting in the 2022 midterm elections. In the footage, Biden places an “I Voted” sticker above his granddaughter’s chest after she asked him to do so, but that short snippet was edited to loop repeatedly, making it appear as if he was groping or inappropriately touching her, while an overlaid caption labeled him a “sick pedophile.”

After the video was reported to Meta by users, a human review determined it did not violate the company’s policies around manipulated content, because it was not generated using AI and the editing was deemed to be obvious enough that it would not fool an “average” user. 

“Experts the Board consulted, and public comments, broadly agreed on the fact that non-AI-altered content is prevalent and not necessarily any less misleading; for example, most phones have features to edit content,” the board wrote. “Therefore, the policy should not treat ‘deep fakes’ differently to content altered in other ways (for example, ‘cheap fakes’).”

Currently, the company treats a video as a violation only if it was manipulated with AI-based tools to depict people saying things they haven’t said. The policy does not cover manipulated audio or media depicting people doing things they never did.

The board called on Meta to expand its moderation to cover a wider range of media, including audio and audiovisual content and media that generally “shows people doing things they did not do.” It also urged Meta to specify more clearly what harms, such as disruption of the electoral process, its policies are meant to prevent.


However, rather than removing or taking down such content, the board recommended the company consider labeling it as significantly altered, giving users context while avoiding “disproportionate” restrictions on freedom of expression, such as satire.

As for how Meta determines whether a video is likely to fool an “average” user, company officials told the board that it weighs a number of factors: whether it is clear that edits have been made; unnatural facial movements or odd pixelation; mouth movements that are out of sync with the accompanying audio; and whether the clip is labeled to disclose manipulation through AI or machine learning tools.

The ruling highlights how, even as policymakers and social media platforms fret about the impact of emerging technologies like AI, the online information ecosystem remains awash in manipulated media made without such tools, relying instead on lower-tech approaches like selectively editing legitimate video or stripping it of its original context.

Eddie Perez, a former director for civic integrity at Twitter and a board member at the nonprofit OSET Institute, told CyberScoop that the decision to concur with Meta’s initial ruling of a policy that “the board itself disparages for being poorly written and inadequate” sends a mixed message around how such content should be handled by social media platforms.

He also called Meta’s differentiation between videos that are manipulated through AI and those that aren’t “wacky.”


“That does seem strange to me and that does seem like a good example of how there’s an overly focused emphasis on AI” when it comes to disinformation, Perez said. “It’s strange to me that Meta is saying the only kind of misleading content we’re going to talk about is if it’s AI-generated.”

The board received 49 public comments about the video, with user sentiments ranging from demands that the content be removed as fraudulent disinformation to claims that taking down such videos would represent an act of censorship and overreach by Meta.

Those included comments submitted by digital rights nonprofits, think tanks and Rep. Adam Schiff, D-Calif., repeating arguments from a letter sent to the Oversight Board last year claiming that the refusal to remove the video “encourages” bad actors to leverage cheap fakes in the 2024 election.

In another comment, David Inserra and Jennifer Huddleston of the Cato Institute warned that a policy change to include non-AI edited videos under Meta’s manipulated content policy “could significantly harm both political and non-political expression, be [abused] by those with more resources and internet trolls, present a problem that would be impossible to handle at scale” and undermine faith in the fairness of Meta’s approach to content moderation.

The pair recommended that rather than including such content under its takedown policies, Meta should adopt an approach that labels and provides context around such videos generated by third parties, similar to the “community notes” feature offered by platforms like Twitter/X.


In a statement sent to CyberScoop after the ruling, Inserra said he was pleased to see the board recommend content labeling in lieu of takedowns, but also warned that “applying labels too broadly may fatigue users.”

This story was updated Feb. 5, 2024, with a comment from the Cato Institute.

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.