ChatGPT conversations are not private, legally protected, or deleted when you think they are. A federal court order issued in 2025 requires OpenAI to preserve all ChatGPT conversation logs for the vast majority of its 400 million users, including conversations users believed they had deleted. This preservation order, issued as part of the New York Times copyright lawsuit against OpenAI, has created an unintended surveillance infrastructure that reproductive health organizations and their patients need to understand immediately.

Why Are ChatGPT Conversations Being Preserved?

In May 2025, a federal court ordered OpenAI to preserve all ChatGPT conversation logs for Free, Plus, Pro, and Team account users as part of copyright litigation. The order applies even to conversations already deleted from users' accounts. The preservation requirement stems from OpenAI's legal exposure in the copyright case, but the retained data is now accessible to law enforcement through valid warrants. The implications extend far beyond copyright disputes.

The first federal search warrant compelling OpenAI to disclose a user's identity based on ChatGPT prompts came in October 2025, when the Department of Homeland Security obtained full account transcripts, names, email addresses, IP logs, and payment data. OpenAI complied by handing over an Excel spreadsheet containing everything in the user's account. This established a legal template that state attorneys general can now replicate. From January to June 2025 alone, OpenAI received 119 requests for user account information and 26 for chat content, according to Stanford's Center for Internet and Society.

What Makes This a Crisis for Reproductive Health?

Research published in March 2026 documented that people in abortion-restrictive states actively prefer AI chatbots over human providers for sensitive reproductive health questions because they believe the conversation is private.
This creates a dangerous inversion: the tool people choose for privacy produces the most legally actionable record. Someone in Texas who types "I'm in Texas, what are my options?" into ChatGPT at 2 a.m. has left a more detailed, more searchable, and more legally dangerous record than if they had called a clinic, texted a friend, or searched Google.

The problem compounds because the AI itself is misleading users. Research in Frontiers in Digital Health found that ChatGPT repeatedly overstated the risks of self-managed medication abortion, contradicting established evidence that it is safe and effective. Bloomberg reported in November 2025 that five major AI tools were routinely directing users to an anti-abortion hotline promoting an unproven treatment. The tool people trust for safety is simultaneously building a legal record and steering them toward misinformation.

How to Protect Patients and Staff from AI Privacy Risks

- Direct Patient Education: Tell patients explicitly that AI chatbots are not confidential and that conversations can be handed over to law enforcement with a warrant. Include this message in waiting rooms, on social media, and in post-appointment materials. Explain that conversations with healthcare providers are legally protected in ways that AI chatbots are not.
- Update Deletion Guidance: Inform patients that if they have already used ChatGPT, Gemini, Claude, Perplexity, or Meta AI on free or paid consumer accounts, deleting those conversations may not have removed them from the provider's servers. For ChatGPT specifically, the May 2025 court preservation order requires OpenAI to retain even deleted conversations.
- Organizational AI Audits: Identify any staff members or patients using generative AI chatbots for anything reproductive health-adjacent. These interactions create records within the scope of legal discovery and warrant requests.
- Recommend Secure Alternatives: Direct patients to resources like the Electronic Frontier Foundation's Surveillance Self-Defense guide and the Digital Defense Fund, which has been training abortion access organizations on digital security since 2017.

OpenAI's January 2026 law enforcement policy confirms the company will disclose user content in response to valid warrants. This is not a hypothetical risk: the template exists, courts have validated it, and state attorneys general now have a working model for obtaining full account histories.

ChatGPT Health, launched in January 2026, extends these risks further by encouraging users to connect Apple Health, MyFitnessPal, wearables, and medical records directly to ChatGPT. The feature is designed to feel like talking to a doctor, but it is not one: the service cannot be made HIPAA compliant regardless of its privacy features, and conversations are not protected by doctor-patient confidentiality.

Asked whether ChatGPT conversations should receive legal privilege like conversations with therapists, lawyers, or doctors, OpenAI CEO Sam Altman said, "We haven't figured that out yet."

The safest guidance for reproductive health organizations is clear: patients should call clinics, text trusted contacts, or use encrypted messaging apps rather than AI chatbots for sensitive health questions. Patients who have already used ChatGPT or similar tools should understand that their conversations may still exist on corporate servers under legal hold, accessible to law enforcement through warrant requests. Healthcare providers have a responsibility to communicate this reality directly to their patients before they turn to AI for answers they believe will remain private.