
Why GenAI puts privacy officers at legal ground zero
Learn what the recent New York Times vs. OpenAI case means for privacy officers and privacy policies – across all organisations that leverage generative AI.
The legal implications of generative AI just got more complex. With a landmark preservation order issued against OpenAI, privacy and security leaders are facing new, uncomfortable questions. We asked Betania Allo (Cybersecurity Lawyer and Policy Strategist) what this means for CISOs – and why ‘delete’ no longer means disappear.
“This case reframes how cybersecurity, privacy, and data governance must evolve in the era of generative AI. For CISOs, CAIOs, DPOs, and legal teams, the message is clear: your AI architecture is no longer just infrastructure. It can become evidence.”
“The court’s preservation order introduced a new precedent in AI governance. A preservation order requires companies to retain all data that could be relevant to a legal case, overriding normal retention and deletion policies. Discovery allows legal teams to request and inspect logs, prompts, outputs, training data, and more. A legal hold freezes deletion policies and activates enterprise-wide data retention protocols.”
“Deleted prompts may still be accessible. Experimentation logs may become subpoenaed records. Privacy policies may no longer protect users if legal orders require data retention. While OpenAI continues to appeal the order, the indefinite retention of data from non-enterprise ChatGPT users may leave organisations exposed to regulatory risk under GDPR and similar frameworks.”
“CISOs should begin with a comprehensive inventory of all AI systems that store logs, ensuring a clear separation between production and experimental environments, and enforcing zero data retention (ZDR) where feasible. Existing data deletion policies should be reassessed in light of litigation hold scenarios, as information once considered ephemeral may now be preserved indefinitely.”
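The inventory step Allo describes can be sketched in code. The following is a minimal, hypothetical example of cataloguing AI systems by environment and retention posture, then flagging those whose logs would be swept into a litigation hold; all system names and fields here are illustrative assumptions, not a real tool or API.

```python
# Hypothetical sketch: inventory AI systems and flag litigation-hold exposure.
# System names, fields, and values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    environment: str           # "production" or "experimental"
    stores_logs: bool          # does the system retain prompts/outputs?
    zero_data_retention: bool  # is ZDR contractually enforced?

def litigation_hold_exposure(inventory: list[AISystem]) -> list[str]:
    """Return names of systems whose stored logs could become
    preserved records under a legal hold."""
    return [s.name for s in inventory
            if s.stores_logs and not s.zero_data_retention]

inventory = [
    AISystem("chat-assistant", "production", stores_logs=True,
             zero_data_retention=False),
    AISystem("prompt-sandbox", "experimental", stores_logs=True,
             zero_data_retention=False),
    AISystem("zdr-gateway", "production", stores_logs=True,
             zero_data_retention=True),
]

print(litigation_hold_exposure(inventory))
# → ['chat-assistant', 'prompt-sandbox']
```

Note that the experimental sandbox is flagged alongside production: as the article stresses, experimentation logs are just as discoverable as production data unless ZDR actually covers them.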
“OpenAI has assured users that only a ‘vetted legal and security team’ can access the chat data retained under the court order. But what does that really mean? For privacy and security professionals, ‘vetting’ is not a legal standard: it’s an internal designation, with no external audit, regulatory oversight, or independent verification attached.”
For CISOs, generative AI is no longer just a data privacy concern. It’s a very real legal risk. In Allo’s words:
“This case is about much more than copyright. It illustrates what happens when highly complex, probabilistic technologies collide with legal systems designed around traceability, accountability, and evidence preservation.
“We once thought of generative AI as experimental or even ephemeral. Now, it must also be viewed as legally actionable. For CISOs and governance leaders, this shifts the mission. We are no longer just securing systems. We are managing the digital memories of machines – and ensuring those memories can withstand legal scrutiny.”
Connect with Betania Allo on LinkedIn. Join us at Black Hat MEA 2025 to stay ahead of the security curve.