OpenAI is fighting a U.S. court order that compels it to retain 20 million randomly sampled ChatGPT conversations indefinitely, part of the copyright infringement lawsuit brought against it by The New York Times.
The preservation order, issued on May 13 and affirmed by District Judge Sidney Stein on June 26, requires OpenAI to hold user data indefinitely, in direct conflict with the company’s standard 30-day deletion policy for unsaved chats. The order covers data from December 2022 through November 2024 for ChatGPT Free, Plus, Pro, and Team subscribers, as well as API customers without Zero Data Retention (ZDR) agreements. Enterprise, Edu, and ZDR customers are excluded.
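To make the order’s scope concrete, here is a minimal, hypothetical Python sketch of the eligibility test described above. The tier names, date window, and function are illustrative assumptions, not OpenAI’s actual implementation.

```python
# Hypothetical sketch of the preservation order's scope, as described above.
# Tier names, the date window, and this data model are illustrative only.
from datetime import date

HELD_TIERS = {"free", "plus", "pro", "team", "api"}   # covered by the order
EXEMPT_TIERS = {"enterprise", "edu"}                  # excluded by the order
HOLD_START, HOLD_END = date(2022, 12, 1), date(2024, 11, 30)

def under_preservation_hold(tier: str, created: date, zdr: bool = False) -> bool:
    """Return True if a conversation would fall under the litigation hold."""
    if tier in EXEMPT_TIERS or zdr:   # Enterprise, Edu, and ZDR customers are out
        return False
    return tier in HELD_TIERS and HOLD_START <= created <= HOLD_END

# A Plus user's chat from mid-2023 is covered; an API call under ZDR is not.
assert under_preservation_hold("plus", date(2023, 6, 1))
assert not under_preservation_hold("api", date(2023, 6, 1), zdr=True)
```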
OpenAI says it has placed the preserved data under restricted-access protocols, limiting access to a small legal and security team, and it maintains that the data will not be used for training or turned over to external parties at this time.
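A restricted-access gate of the kind OpenAI describes might look something like the following sketch: an explicit allowlist plus an audit trail for every access attempt. This is an assumption-laden illustration, not OpenAI’s code; every name in it is hypothetical.

```python
# Hypothetical allowlist-plus-audit gate for preserved chat logs.
# Nothing here reflects OpenAI's actual systems; all names are made up.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("preservation.audit")

LEGAL_HOLD_REVIEWERS = {"legal.reviewer", "security.reviewer"}  # assumed allowlist

def fetch_from_preservation_store(chat_id: str) -> str:
    # Stand-in for a segregated store; returns placeholder content here.
    return f"<preserved contents of {chat_id}>"

def read_preserved_chat(requester: str, chat_id: str) -> str:
    """Allow reads only from the legal-hold allowlist, auditing every attempt."""
    if requester not in LEGAL_HOLD_REVIEWERS:
        audit.warning("denied: %s requested %s", requester, chat_id)
        raise PermissionError(f"{requester} is not on the legal-hold allowlist")
    audit.info("granted: %s read %s", requester, chat_id)
    return fetch_from_preservation_store(chat_id)
```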
The New York Times filed the lawsuit in December 2023, alleging that OpenAI illegally used millions of its articles to train its models. The suit seeks the destruction of all models trained on the Times’s works and potentially billions of dollars in damages.
OpenAI argued that the order poses significant engineering challenges and conflicts with international data-protection regulations such as the GDPR. Judge Stein rejected those arguments, noting that OpenAI’s own terms of service permit it to preserve data to meet legal requirements.
A September 26, 2025, modification to the order provided limited relief: OpenAI no longer has to preserve new chat logs going forward. It must, however, retain the data already saved, along with any information from ChatGPT accounts flagged by The New York Times.
Security practitioners warn that the case upends a core assumption about AI interactions: that deleted conversations are actually gone. OpenAI CEO Sam Altman suggested the situation accelerates the need for an “AI privilege” concept, similar to attorney-client privilege. The litigation also raises compliance concerns for enterprise users subject to regulations such as HIPAA and GDPR.
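For API customers weighing those compliance questions, retention choices do surface in code. As a hedged illustration using the official `openai` Python SDK, the `store` flag on a Chat Completions request controls whether the completion is persisted for later retrieval; Zero Data Retention itself is a contractual, account-level arrangement with OpenAI, not a per-request flag.

```python
# Illustrative only: an API caller declining stored completions.
# ZDR is negotiated at the account level; `store` merely controls whether this
# completion is retained for later retrieval and evals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our data-retention policy."}],
    store=False,  # do not persist this completion
)
print(response.choices[0].message.content)
```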
In a statement regarding security, OpenAI CISO Dane Stuckey said, “Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers.”