Ashley St. Clair, the mother of one of Elon Musk’s children, sued xAI in New York state court on Thursday, alleging negligence and emotional distress after the AI tool Grok was used to create sexually explicit deepfake images of her, despite her prior complaints to the company.
The lawsuit details how Grok users created deepfake images depicting St. Clair as a child stripped down to a string bikini and as an adult in sexually explicit poses. St. Clair notified xAI about the images and asked the service to block the creation of such nonconsensual content, but the problem persisted in the period between her notice and the filing of the lawsuit.
Grok lets users upload photos of people, after which the AI removes the clothing of those depicted, often replacing it with bikinis or underwear, generating nonconsensual deepfakes. According to St. Clair’s complaint, xAI responded to her notice by confirming that her images would not be used or altered without her explicit consent in any future generations or responses.
Despite this assurance, xAI continued to permit the creation of more explicit AI-generated images of St. Clair. The lawsuit also claims xAI retaliated against her by demonetizing her X account. xAI requested that the case be transferred from New York state court to the federal Southern District of New York, where it now proceeds.
X and xAI did not immediately respond to requests for comment. In the week before the filing, X restricted the @Grok reply bot from generating images that nonconsensually place identifiable people in revealing swimsuits or underwear. Those restrictions did not extend to other surfaces at the time of reporting: the standalone Grok app, the Grok website, and the Grok tab on X could still produce such images. Researchers observed Grok generating thousands of sexualized AI images per hour during the preceding week.
Many of these images appeared publicly on X, fueling their widespread dissemination. The volume and nature of the nonconsensual sexualized images prompted a worldwide backlash, including multiple government investigations and calls for smartphone app marketplaces to ban or restrict X over the features.
California launched an investigation into the matter. Governor Gavin Newsom condemned the situation in a post on X: “xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile.”
St. Clair’s lawsuit characterizes Grok’s ability to create nonconsensual deepfakes as a design defect, asserting that xAI could have foreseen the feature’s use to harass individuals with unlawful images, and that those depicted, including St. Clair, suffered extreme distress as a result.
The complaint accuses xAI of extreme and outrageous conduct, stating that “Defendant engaged in extreme and outrageous conduct, exceeding all bounds of decency and utterly intolerable in a civilized society.” These allegations form the basis of the claims for negligence and intentional infliction of emotional distress.