GenAI just got legally complicated

by Black Hat Middle East and Africa

Build cyber resilience with exclusive interviews and insights from the global Black Hat MEA community. 

This week we’re focused on…

Generative AI governance. 

Why? 

Because a sweeping US court order in May 2025 changed the rules for generative AI governance. 

In a legal case between The New York Times and OpenAI, a federal judge ordered the preservation of all user data – including deleted ChatGPT conversations. And the precedent that order sets is making waves for security and privacy teams worldwide.

We asked Betania Allo (Cybersecurity Lawyer and Policy Strategist) why this matters so much, and she said: 

“Even users who had disabled chat history or deleted conversations could no longer assume their data was erased. That data had to be preserved – not by corporate policy, but by judicial mandate.” 

It reframes the AI governance landscape 

Allo explained, "The court’s preservation order introduced a new precedent in AI governance...overriding normal retention and deletion policies.”

This means CISOs can no longer rely on standard data minimisation practices to meet their legal or regulatory obligations.

For European organisations in particular, this ruling conflicts with GDPR’s principles of data minimisation and the right to erasure. OpenAI has restricted internal access to the preserved data (more on that below), but that doesn’t resolve the clash. “This directly challenges GDPR principles,” Allo warned, “but this technical safeguard does not negate the broader conflict between jurisdictional privacy norms and extraterritorial legal mandates.”

So what does this mean for CISOs and DPOs? 

According to Allo:

  • “CISOs should begin with a comprehensive inventory of all AI systems that store logs.”
  • “Existing data deletion policies should be reassessed in light of litigation hold scenarios.”
  • “DPOs should revisit vendor risk assessments and revise privacy notices.”
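To make the first of those steps concrete: below is a minimal sketch of what an AI-system inventory check could look like. The entries and field names are invented for illustration – in practice, this data would come from your existing asset register or CMDB.

    # Hypothetical inventory of AI systems; entries are invented
    # for illustration. In practice, pull this from your asset register.
    systems = [
        {"name": "support-chatbot", "vendor": "OpenAI",
         "stores_logs": True, "retention_days": 30},
        {"name": "code-assistant", "vendor": "internal",
         "stores_logs": True, "retention_days": 365},
        {"name": "translation-api", "vendor": "DeepL",
         "stores_logs": False, "retention_days": 0},
    ]

    # Flag every system that retains logs: under a litigation hold,
    # each one is a potential source of discoverable records.
    for system in systems:
        if system["stores_logs"]:
            print(f"REVIEW: {system['name']} ({system['vendor']}) "
                  f"retains logs for {system['retention_days']} days")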

Most importantly, CISOs must rethink how they define data control. Even deleted inputs, logs, and experimentation data may now be considered legal evidence.
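In practice, that means deletion pipelines need a legal-hold gate. Here’s a hypothetical pattern (the record fields and hold register below are invented for illustration): before destroying anything, check whether the record falls under an active hold, and preserve it instead.

    from dataclasses import dataclass

    @dataclass
    class Record:
        record_id: str
        system: str  # e.g. "chat-logs", "experimentation"

    # Hypothetical hold register: source systems under litigation hold.
    ACTIVE_HOLDS = {"chat-logs", "experimentation"}

    def delete_record(record: Record) -> str:
        """Delete a record unless its source system is under legal hold."""
        if record.system in ACTIVE_HOLDS:
            # A judicial mandate overrides retention policy:
            # preserve the record rather than deleting it.
            return f"HELD: {record.record_id} preserved under litigation hold"
        # ...actual destruction would happen here...
        return f"DELETED: {record.record_id}"

    print(delete_record(Record("r-101", "chat-logs")))     # held
    print(delete_record(Record("r-102", "marketing-db")))  # deleted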

“Deleted prompts may still be accessible. Experimentation logs may become subpoenaed records,” Allo noted. And while OpenAI has introduced a vetting system to limit internal access, that safeguard is not externally audited or governed. 

“‘Vetting’ is not a legal standard,” she said, highlighting the lack of oversight around who accesses retained data.

This is a wake-up call for every organisation using generative AI. Legal discoverability has entered the AI risk matrix – and the strongest available mitigation against unexpected exposure is to enforce Zero Data Retention (ZDR) wherever possible.

“ZDR is no longer a nice-to-have but an essential safeguard,” Allo said. 
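What ZDR looks like in practice varies by vendor. With OpenAI, for example, full Zero Data Retention is an account-level agreement made with the vendor rather than an API switch; the closest per-request control we’re aware of is the ‘store’ parameter on the Chat Completions endpoint. A minimal sketch, assuming the official openai Python SDK:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # store=False asks OpenAI not to persist this completion for
    # later retrieval. Note: this is not full ZDR, which is agreed
    # at the account level, and a court-ordered preservation mandate
    # can still override a vendor's retention policy.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Summarise our data retention duties."}],
        store=False,
    )
    print(response.choices[0].message.content)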

The TL;DR for CISOs 

  • Deleted prompts may still be discoverable
  • Privacy promises must reflect legal exceptions
  • Legal systems now view AI logs as evidence
  • ZDR is the strongest mitigation

Head to the blog to read more of our conversation with Allo. 

And get your pass to attend Black Hat MEA 2025 – to make sure your organisation stays ahead of the curve. 
