
Why don’t business leaders trust Gen Z?
A survey finds that 52% of leaders see Gen Z as a security risk, 47% fear leaks, and 18% report actual incidents. Is this mistrust fair, and what’s really driving it?
Build cyber resilience with exclusive interviews and insights from the global Black Hat MEA community.
Generative AI governance.
Because a sweeping US court order in May 2025 changed the rules.
In a legal case between The New York Times and OpenAI, a federal judge ordered the preservation of all user data – including deleted ChatGPT conversations. And that precedent is creating waves for security and privacy teams worldwide.
We asked Betania Allo (Cybersecurity Lawyer and Policy Strategist) why this matters so much, and she said:
“Even users who had disabled chat history or deleted conversations could no longer assume their data was erased. That data had to be preserved – not by corporate policy, but by judicial mandate.”
Allo explained, “The court’s preservation order introduced a new precedent in AI governance...overriding normal retention and deletion policies.”
This means CISOs can no longer rely on standard data minimisation practices to meet their legal or regulatory obligations.
For European organisations in particular, this ruling conflicts with GDPR’s principles of data minimisation and the right to erasure. “This directly challenges GDPR principles,” Allo warned, “but this technical safeguard does not negate the broader conflict between jurisdictional privacy norms and extraterritorial legal mandates.”
According to Allo, the most important shift is in how CISOs define data control: even deleted inputs, logs, and experimentation data may now be considered legal evidence.
“Deleted prompts may still be accessible. Experimentation logs may become subpoenaed records,” Allo noted. And while OpenAI has introduced a vetting system to limit internal access, that safeguard is not externally audited or governed.
“‘Vetting’ is not a legal standard,” she said, highlighting the lack of oversight around who accesses retained data.
This is a wake-up call for every organisation using generative AI. Legal discoverability has entered the AI risk matrix – and the only proven way to avoid unexpected exposure is to enforce Zero Data Retention (ZDR) wherever possible.
“ZDR is no longer a nice-to-have but an essential safeguard,” Allo said.
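True ZDR ultimately depends on contractual and vendor-side guarantees, but the same principle applies to your own systems: never persist prompt content you may later be compelled to produce. As a minimal illustrative sketch (all names here are hypothetical, not from any vendor API), an audit log can record metadata and a one-way hash of each prompt, so accountability is preserved while the prompt text itself is never written to disk:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Metadata-only audit entry: enough for accountability, no prompt content."""
    timestamp: str
    user_id: str
    model: str
    prompt_sha256: str  # one-way hash: proves *which* prompt was sent, not what it said
    prompt_chars: int

def make_audit_record(user_id: str, model: str, prompt: str) -> AuditRecord:
    """Build an audit record without ever persisting the prompt text itself."""
    return AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user_id=user_id,
        model=model,
        prompt_sha256=hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        prompt_chars=len(prompt),
    )

# Hypothetical usage: the raw prompt exists only in memory, never in the log.
record = make_audit_record("u-123", "example-model", "Summarise our Q3 incident report.")
log_line = json.dumps(asdict(record))
assert "incident" not in log_line  # raw prompt text never reaches the log
```

Note the limitation: a sketch like this only controls the copies your organisation holds. If the provider retains inputs under a preservation order, client-side minimisation does not reach that data, which is why Allo frames ZDR as a contractual requirement, not just an engineering practice.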
Head to the blog to read more of our conversation with Allo.
And get your pass to attend Black Hat MEA 2025 – to make sure your organisation stays ahead of the curve.
Join the newsletter to receive the latest updates in your inbox.
CISOs Nikk Gilbert (RWE) and Stefan Baldus (HUGO BOSS) explain why human fallibility and awareness matter more than any tech stack.
Every breach leaves a trail. Learn why digital forensics training at BHMEA 2025 is about connecting the dots and telling the story of an attack.