A Doctored Biden Video Is a Test Case for Facebook’s Deepfake Policies


A video of President Joe Biden, doctored to make an innocent moment with his granddaughter during the 2022 midterm elections look inappropriate, has been left up on Facebook. Meta's Oversight Board is now reviewing that decision as a test case for how the company will handle manipulated media and election disinformation ahead of the 2024 US presidential election.


Facebook has come under scrutiny for its handling of a manipulated video of President Joe Biden. The original footage, first posted in May, showed Biden placing an “I voted” sticker on his granddaughter’s chest and kissing her on the cheek. The doctored version was edited to make it appear that he repeatedly touched the girl inappropriately, and was accompanied by a caption calling him a “pedophile.” Despite the manipulation, Facebook’s parent company, Meta, chose not to remove the video.

That decision has prompted Meta’s Oversight Board to take up the case, with the aim of pressing Meta to spell out how it will handle manipulated media and election disinformation ahead of the 2024 US presidential election and other elections worldwide. The Board says safeguarding the integrity of elections requires platforms like Facebook to be equipped for the challenges posed by advances in artificial intelligence (AI).

Meta’s Policies

In a blog post, Meta said the manipulated video did not violate its hate speech, harassment, or manipulated media policies. Under the manipulated media policy, a video is removed only if it has been edited or synthesized in ways that are not apparent to an average person and that would likely mislead viewers into believing the subject said words they did not say. Meta added that the Biden video was not manipulated using AI or machine-learning techniques.

Harms of Generative AI

Experts have raised concerns about the use of generative AI in the 2024 elections, because it enables the creation of far more realistic fake audio, video, and imagery. While Meta and other tech companies have committed to mitigating the harms of generative AI, current strategies such as watermarking content have had only limited effectiveness. Recently in Slovakia, a faked audio recording circulated on Facebook in which a prominent politician appeared to discuss rigging the election. Its creators exploited a loophole in Meta’s manipulated media policy, which does not cover faked audio.

Examining Meta’s Policies

In light of the manipulated Biden video, Meta’s Oversight Board has sought public comments on the case, focusing in particular on the role of AI in manipulating and generating content. The Board intends to examine Meta’s policies on manipulated video more closely. But while the case offers an opportunity to probe those policies, some questions about how manipulated media should be handled are likely to remain open.

Power of the Oversight Board

Meta’s Oversight Board has the authority to issue binding decisions and recommendations, which could significantly shape Meta’s approach to the 2024 elections. The Board has previously reviewed cases involving pre-election violence and post-election insurrections in countries such as Cambodia and Brazil. It cannot, however, dictate how Meta allocates resources to problems like disinformation and hate speech, particularly outside the US, where Meta’s record on moderating non-English content and understanding local contexts has long been questioned.

Tools and Context

Unlike Google, which has introduced features to help users identify AI-generated or manipulated images, Meta has not built consumer-facing tools to help users understand the content they encounter. The Oversight Board may use the Biden video case to push Meta toward guidelines on AI-generated and AI-manipulated content, but a single case is unlikely to resolve every question.

Manipulation in non-English contexts is a further concern: Meta has long struggled to moderate content that is not in English and to counter disinformation and hate speech outside the US.

Anticipated Outcomes

The Oversight Board’s review, prompted by the doctored Biden video, aims to assess how well Meta’s policies are working, both domestically and globally. Its conclusions and recommendations could shape how Meta handles manipulated media and disinformation worldwide. Even so, a single ruling can go only so far in addressing the broader problem of manipulated media.

Related Articles

Related coverage includes:

  1. Slovakia’s election deepfakes and the dangers AI poses to democracy
  2. Generative AI exacerbating trust issues in the US Congress
  3. Debunking suspicious audio recordings alleged to be deepfakes
  4. A Senate meeting on the civilizational risks of generative AI
  5. Teachers using generative AI for lesson plans
  6. The iPhone 15 focusing on intuitive AI rather than generative AI
  7. AI chatbots for automating mundane tasks
  8. The FBI’s use of face recognition without proper training

About the Author

Vittoria Elliott is a Platforms and Power reporter for WIRED. She previously covered disinformation and labor in markets outside the US and Western Europe for Rest of World, and has also worked with The New Humanitarian, Al Jazeera, and ProPublica.


Source: https://www.wired.com/story/a-doctored-biden-video-is-a-test-case-for-facebooks-deepfake-policies/
