
Let’s talk about privacy—specifically, the kind you thought you had when you hit “delete.”
OpenAI received a court order to retain every single ChatGPT conversation, even the ones you erased. Yep. Even the awkward ones. Even the ones that start with, “Hypothetically, if I were to…”
Why? Because The New York Times is suing them over copyright, and now everyone’s deleted chats are potential evidence. If you’re not on an enterprise plan or haven’t negotiated a special agreement (because you’re, you know, not JPMorgan), your logs are staying right where they are: on OpenAI’s servers.
This isn’t just about one lawsuit. This is about the growing mismatch between how we talk about privacy and how it’s actually handled in practice.
The myth of “Zero Retention”
Every AI company has some version of this sentiment:
“We care deeply about your privacy.”
Sure, you do. Right up until a lawyer shows up. Then the logs you “deleted” become “temporarily archived for compliance purposes,” forever.
Let me translate:
- If you’re a normal user, your chats are saved.
- If you’re a legal liability, your chats are saved.
- If you’re a data point that helps them improve the model? You bet your tokens they’re saved.
Unless you have a legal agreement that says otherwise, assume everything you say to an LLM could come back to haunt you in court or otherwise.
Why this is a security nightmare
This court order turns OpenAI into a centralized honeypot of personally identifiable information (PII), trade secrets, and dumb things people typed at 2 AM, thinking it would all disappear. It won’t.
You now have:
- A massive, queryable database of sensitive prompts.
- Retention policies determined by legal discovery, not risk.
- A user base that still thinks they’re safe after clicking “delete.”
And worse: the bad actors know it.
What happens when AI meets legal
The security problem isn’t just technical. It’s legal, procedural, and architectural. This ruling sets a precedent: courts can force AI companies to violate their own privacy policies, and users will never know unless they’re reading very closely.
Forget “shift left.” You now need to shift legal. Your security policy should explicitly address what happens when a court order conflicts with your data retention promises. Because it will.
What you should do
If you run a company using AI:
- Scrub your prompts before they go out. Don’t assume OpenAI will do it for you. (A sketch of what that can look like follows this list.)
- Negotiate zero-retention contracts. Or run your own local LLM.
- Educate your employees that “ChatGPT is not your friend. It’s a deposition waiting to happen.”
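
Here is a minimal sketch of prompt scrubbing: a regex-based redactor that sits in front of whatever client you use. The patterns and the `redact` helper are illustrative assumptions only; a real deployment would lean on a proper PII and secrets detection library tuned to your own data.

```python
import re

# Illustrative patterns only -- a real deployment would use a dedicated
# PII/secrets detection library, not four hand-rolled regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like PII or a secret with a placeholder
    before the prompt ever leaves your network."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@example.com the Q3 numbers, card 4111 1111 1111 1111."
    print(redact(raw))
    # -> Email [REDACTED_EMAIL] the Q3 numbers, card [REDACTED_CREDIT_CARD].
```

The point isn’t these particular patterns. The point is that nothing leaves your network until something you control has had a chance to strip it.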
If you’re building AI:
- Build ephemeral memory by default (a sketch follows this list). Or don’t pretend you care about privacy.
- Make retention optional, not assumed. And never tie it to billing tiers.
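
For the builders, here’s a minimal sketch of what “ephemeral by default” can mean in practice: conversations live only in process memory and expire on a TTL, with persistence as a separate, explicit opt-in. The `EphemeralConversationStore` class and its one-hour default are assumptions for illustration, not anyone’s actual product.

```python
import time
import uuid

class EphemeralConversationStore:
    """Keeps conversations only in process memory and expires them after a TTL.
    Nothing here touches disk; persistence would be a separate, opt-in path."""

    def __init__(self, ttl_seconds: int = 3600):
        self.ttl = ttl_seconds
        self._conversations: dict[str, dict] = {}

    def append(self, conversation_id: str | None, message: str) -> str:
        now = time.time()
        self._expire(now)
        if conversation_id is None:
            conversation_id = uuid.uuid4().hex
        convo = self._conversations.setdefault(
            conversation_id, {"messages": [], "expires_at": now + self.ttl}
        )
        convo["messages"].append(message)
        convo["expires_at"] = now + self.ttl  # sliding expiry
        return conversation_id

    def delete(self, conversation_id: str) -> None:
        # Delete means delete: the data is gone from the only place it lived.
        self._conversations.pop(conversation_id, None)

    def _expire(self, now: float) -> None:
        expired = [cid for cid, c in self._conversations.items() if c["expires_at"] <= now]
        for cid in expired:
            del self._conversations[cid]
```

When “delete” is a dictionary pop in the only place the data ever lived, your privacy policy and your architecture finally say the same thing.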
And for the love of security: stop letting your legal team write your product trust docs.
Final thought
The future isn’t private by default. It’s private by negotiation. This case just proved it.
And if your AI stack is built on trust, maybe start by asking: who actually controls the logs?
Because “delete” doesn’t mean what you think it means anymore.
At Sumo Logic, we always say to keep all your logs for investigations, but we didn’t mean it like this. You still need proper visibility, monitoring, and security around those logs. Your next step: get logging in place for these AI tools and build alerts for the silly things people do that can quickly turn into security incidents. A sketch of that kind of logging follows.
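
The wrapper below records every prompt an employee sends to an external AI tool as structured JSON and flags the obviously risky ones. The field names, the `log_ai_request` helper, and the sensitive-pattern list are all hypothetical assumptions; ship events like these to your log platform and alert on the flagged ones.

```python
import json
import logging
import re
import time

# Structured JSON logs: easy to ship to any log platform and to build alerts on.
logger = logging.getLogger("ai_usage")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())

# Illustrative "this probably shouldn't leave the building" patterns.
SENSITIVE = re.compile(r"ssn|password|api[_ ]?key|customer list|source code", re.IGNORECASE)

def log_ai_request(user: str, tool: str, prompt: str) -> None:
    """Record every prompt sent to an external AI tool, flagging risky ones.
    An alert rule can then fire on events where "flagged" is true."""
    event = {
        "timestamp": time.time(),
        "event_type": "ai_prompt",
        "user": user,
        "tool": tool,
        "prompt_chars": len(prompt),  # log the size, not the raw prompt itself
        "flagged": bool(SENSITIVE.search(prompt)),
    }
    logger.info(json.dumps(event))

# Example: this would show up as a flagged event worth alerting on.
log_ai_request("jane.doe", "chatgpt", "Here is our customer list, summarize it")
```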
Your data isn’t gone. So be sure to monitor it with Sumo Logic.