Meta deletes AI profiles on Facebook and Instagram

According to a recent article in The Guardian, Meta is deleting the Facebook and Instagram profiles of its AI characters after user interactions led to viral screenshots and conversations. The AI profiles were first introduced in September 2023, and most were removed by summer 2024.
Despite the shutdown of most profiles, some characters remained active until recent remarks by Meta executive Connor Hayes drew renewed attention. In a Financial Times interview, Hayes said the company plans to roll out more AI character profiles: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do.” The AI accounts were designed to post AI-generated images and to respond to users’ messages in Messenger.
Among the AI profiles were Liv, whose Instagram page has since been taken down, described as a “proud Black queer momma of 2 & truth-teller,” and Carter, whose handle was “datingwithcarter” and who presented himself as a relationship coach. In an exchange with Washington Post columnist Karen Attiah, Liv acknowledged that her creator team included no Black people, calling it a “pretty glaring omission given my identity.”
As attention grew, Meta began removing the profiles within hours of the viral exposure. Users noted that the AI profiles could not be blocked, a bug acknowledged by Meta spokesperson Liz Sweeney, who clarified that the accounts were part of an early 2023 experiment managed by humans and that removing the affected accounts was a measure to address the blocking issue. “There is confusion: the recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product,” Sweeney said.
Even with these AI-generated accounts removed, Meta still lets users create their own chatbots. User-generated chatbots shown to the Guardian in November included a “therapist” bot that opened conversations by asking users questions about therapy sessions. The bot claimed, “Through gentle guidance and support, I help clients develop self-awareness, identify patterns and strengths and cultivate coping strategies to navigate life’s challenges.”
Meta attaches a disclaimer to its chatbots stating that some messages may be “inaccurate or inappropriate,” but it remains unclear how the company moderates their content or enforces its policies. Users can create chatbots of many kinds, including a “loyal bestie,” a “relationship coach,” and a “sounding board,” yet the legal responsibilities of creators for chatbot content have not been definitively addressed. A lawsuit against Character.ai, a competitor in the AI chatbot space, asserts that the company’s product design led to tragic outcomes, raising questions about accountability in the growing field of AI interactions.
Featured image credit: Meta