If you’ve ever used ChatGPT’s temporary chat feature thinking your conversation would vanish after closing the window, well, it turns out that’s no longer the case.
OpenAI is now under fire after revealing it’s been keeping records of deleted and temporary chats, not by choice, but because of a legal mandate tied to a lawsuit. The disclosure, which came more than three weeks after the retention began, has left many users feeling blindsided.
It all started with a federal court order issued back in May, which requires OpenAI to preserve any and all output data — even if users tried to delete it. That includes chats created in the supposedly private, one-time “temporary” mode.
The move is tied to an ongoing legal battle with The New York Times, which is suing OpenAI and Microsoft over alleged copyright violations. Their argument? That ChatGPT can reproduce copyrighted material almost word for word — and that even “deleted” chats might contain examples that prove their case.
OpenAI complied right away but didn’t inform users until early June, when a blog post finally appeared explaining that unless you’re using an enterprise-tier product or an API endpoint covered by a zero data retention (ZDR) agreement, your conversations are likely being held in storage indefinitely, at least for now.
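A quick note for developers: none of this retention behavior is something you toggle from code. As a minimal sketch, assuming the official openai Python client, a standard API call looks like the one below; the "store" flag shown only opts the request out of OpenAI’s separate stored-completions feature, and ZDR itself is an account-level agreement with OpenAI, not a request parameter, so neither has any bearing on the court-ordered hold.

    # A minimal sketch, assuming the official openai Python client (pip install openai).
    # Nothing here enables ZDR: that is an account-level agreement with OpenAI,
    # not a request parameter. The "store" flag below only opts this request out of
    # OpenAI's stored-completions feature and is unrelated to the court-ordered hold.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Draft a one-line status update."}],
        store=False,  # do not persist this completion for OpenAI's evals feature
    )
    print(response.choices[0].message.content)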
Users Cry Foul as OpenAI Admits to Storing Supposedly Deleted Chats
On platforms like X (formerly Twitter), users didn’t take the news lightly. Some felt betrayed. Others were confused about how long their data had been sticking around. A few noted the contradiction between what the UI suggested and what was actually happening behind the scenes.
The real issue? OpenAI hadn’t disclosed the change when it first happened.
In its defense, the company said it’s simply following the judge’s orders — not harvesting extra data voluntarily. The stored conversations are being isolated under a legal hold, meaning only a small internal team has access. None of it, they stress, is being handed to The New York Times or any other party right now.
Still, for people who thought “delete” really meant delete, the whole thing felt like a bait-and-switch.
Sam Altman Floats a New Concept: ‘AI Privilege’
OpenAI’s CEO, Sam Altman, weighed in not long after the blowback started gaining traction. In a series of late-night posts, he described the court’s request as excessive and said OpenAI would be challenging it.
But more notably, he raised a new idea — something he called “AI privilege.”
The concept? That conversations with AI systems might deserve the same kind of confidentiality you’d get when speaking to a doctor or a lawyer. That’s not a small claim. If it gained legal recognition, it could reshape how AI interactions are handled in everything from lawsuits to internal audits.
Right now, it’s just a concept. But the fact that OpenAI is even bringing it up suggests the company’s looking beyond this case — maybe toward a broader framework that shields AI interactions from unwanted scrutiny.
For Businesses, the Stakes Are Bigger Than One Court Case
While most attention is focused on the user angle, companies integrating ChatGPT into internal tools or customer-facing services now face a much trickier landscape.
Even if a company uses a ZDR endpoint and assumes it’s in the clear, copies of the same conversations can still end up in its own application logs, analytics pipelines, or third-party backups. Many CIOs and compliance leads are likely re-evaluating how “temporary” their AI workflows really are, and whether their systems might unintentionally store interactions they promised wouldn’t stick around; one concrete mitigation is sketched below.
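What that re-evaluation can look like in practice: the sketch below is purely illustrative (it uses only Python’s standard logging and re modules, and the RedactChatText class and the field names it matches are hypothetical, not part of any OpenAI API). It scrubs prompt and response text from log records before any handler writes them somewhere persistent.

    # Illustrative sketch: redact chat text before it lands in application logs.
    # The class name and the matched field names are hypothetical examples.
    import logging
    import re

    # Matches JSON-style fields like "prompt": "..." inside a log line.
    CHAT_FIELDS = re.compile(r'("(?:prompt|content|completion)"\s*:\s*)"[^"]*"')

    class RedactChatText(logging.Filter):
        """Replace chat text in log records with a placeholder before handlers see them."""
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = CHAT_FIELDS.sub(r'\1"[REDACTED]"', str(record.msg))
            return True  # keep the record, just scrubbed

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    logger = logging.getLogger("chat-app")
    logger.addFilter(RedactChatText())

    logger.info('{"prompt": "internal salary figures, do not share", "latency_ms": 412}')
    # Output: {"prompt": "[REDACTED]", "latency_ms": 412}

A filter like this won’t catch every path the data can take (analytics events and backups need the same treatment), but it shows the kind of explicit control a genuinely “temporary” workflow requires.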
For enterprise users, the current legal carve-outs (like for ChatGPT Enterprise accounts) may offer a buffer. But the bigger picture here is that legal preservation orders are now in play — and that means every assumption about ephemeral AI data might need to be questioned.
Data governance just got a lot more complicated.
What Comes Next?
OpenAI has formally objected to the judge’s order, arguing that the demand to retain user chat data lacks a strong factual basis and places an unnecessary burden on the company.
At a recent hearing, the judge hinted that the preservation order might not be permanent. She asked both sides to come up with a sampling method to determine whether deleted chats differ meaningfully from the ones already stored. OpenAI was expected to submit that plan by June 6.
In the meantime, the company remains in a tight spot. It has to comply with a legal directive it disagrees with, while trying to reassure users and customers that their privacy still matters.
A Pivotal Moment for AI Privacy
This isn’t just another legal footnote. It’s turning into a pivotal moment in how the tech world defines AI privacy. If “AI privilege” gains traction, it could influence everything from app design to data regulation. If it doesn’t, it may still spark a broader reckoning about how people think about what they tell machines.
Right now, OpenAI is caught in the middle — juggling court orders, enterprise expectations, and public trust — while fighting a legal battle that could redefine the rules for everyone building or using AI.
And for anyone who assumed their chats disappeared the moment they hit delete? That assumption just became a lot more complicated.
Image: DIW-Aigen