Meta's Oversight Board has upheld the decision to leave a clearly fabricated fake Joe Biden video on Facebook, while urging Meta to revisit and potentially broaden its narrow policy on manipulated media. The video, which neither employed AI editing tools nor portrayed President Biden saying things he never said, was judged so obviously altered that it was unlikely to deceive viewers, and therefore did not breach Meta's existing guidelines.
What’s the story behind the fake Joe Biden video?
The controversial video, which emerged on Facebook in May 2023, originated from a clip posted the previous October. It captured a moment between the President and his granddaughter, a first-time voter, exchanging “I voted” stickers.
The original footage shows Biden affectionately placing a sticker above his granddaughter's chest and kissing her on the cheek. The fake Joe Biden video, however, altered this innocent exchange to falsely insinuate inappropriate conduct: it loops a seven-second segment of the gesture, sets it to an explicit lyric from "Simon Says" by Pharoahe Monch, and carries a caption labeling Biden a "sick pedophile" and criticizing his supporters.
The video was automatically removed when first posted, but the uploader appealed the takedown; a human reviewer concluded that it did not violate Meta's stated policies, and the video was reinstated on Facebook.
The Oversight Board, while allowing the video to remain accessible, criticized the current manipulated media guidelines as overly restrictive. The policies presently forbid only videos altered to depict individuals saying things they never said, do not address visual manipulations of actions, and apply only to AI-generated content. The Board highlighted the policy's lack of clear rationale, its confusing and ambiguous presentation to users, and its failure to adequately identify and mitigate potential harms, recommending a thorough policy revision.
As Meta contemplates the Oversight Board’s recommendations, promising a public response within 60 days, the broader implications for content governance and policy adjustments ahead of the 2024 election remain uncertain.
The issue surrounding the fake Joe Biden video on Facebook has ignited a critical debate on the fine line between freedom of expression and the potential for harm in digital spaces. This incident not only showcases the complexities involved in moderating content on social media platforms but also highlights the challenges that tech companies face in addressing manipulated media. The decision by Meta’s Oversight Board to allow the video to stay, despite its deceptive alterations, underlines a pivotal moment for digital governance and the ethical considerations therein.
At the heart of this controversy is the broader question of responsibility. Social media giants like Meta are tasked with the monumental duty of safeguarding the digital public square from misinformation and harmful content, all while upholding the principles of free speech. The Oversight Board’s critique of Meta’s current manipulated media policy as “too narrow” suggests a pressing need for policies that are both comprehensive and adaptable to the evolving landscape of digital content creation and distribution.
The nuanced nature of this specific fake Joe Biden video case raises important questions about user discernment and the role platforms play in curating content. However, the decision also risks setting a precedent in which the bar for what constitutes harmful manipulation becomes ambiguously high, potentially allowing misleading content to proliferate under the guise of obvious falsity.
The Board's call for a policy overhaul reflects a growing consensus that current guidelines are insufficient to address the sophisticated range of content manipulation technologies, AI-based and otherwise. As manipulated videos become more convincing and widespread, clear, effective policies that protect individuals from defamation and the public from misinformation become increasingly imperative.
As Meta reviews the Oversight Board’s recommendations, the outcome of this deliberation will undoubtedly have significant implications for content moderation practices industry-wide. The fake Joe Biden video case exemplifies the critical challenges at the intersection of technology, ethics, and governance in the digital age, urging a reevaluation of how social media platforms can better navigate the delicate balance between protecting freedom of expression and preventing harm.
Featured image credit: Markus Spiske/Unsplash