Have you read/watched Dune?

by Black Hat Middle East and Africa


Discover insights, inspiration and exclusive interviews from the global Black Hat MEA community – in your inbox every week. 

This week we’re focused on…

This line from Dune (Frank Herbert’s novel, now a major blockbuster movie) gets quoted a lot in strategy circles because it’s so brutally simple:

“He who can destroy a thing, controls a thing.”

We encourage you to read it…

In the novel, it’s about economic choke points and political leverage. But the more we talk about AI, identity infrastructures, and machine-speed automation in security, the more these words echo in our heads. 

Because in 2026, the organisation that loses control will be the one whose dependencies can be disabled, corrupted or manipulated faster than it can respond.

Control comes down to what someone else can (or can’t) break. 

Identity: our new Arrakis

Think of identity as the ‘spice’ of the modern organisation. 

(If you haven’t read or watched Dune, let’s get you up to speed: Arrakis is the fictional planet where the series is set, and it’s famous for being the only source in the whole universe of a life-extending substance called spice). 

In organisations today, everything relies on identity – workforce access, service accounts, APIs, AI agents, supply chain integrations, and on and on. 

So when an attacker steals credentials, they can destabilise the system itself. 

If they can disrupt your identity provider or hijack a non-human identity, they don’t need a foothold. They have the kill switch. If they want to, they can break your control plane – and that means they can control you.

That’s Herbert’s rule, translated directly into modern security practice.
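
To make that concrete: the defensive counterpart of Herbert’s rule is being able to cut off a compromised identity faster than an attacker can use it. Below is a minimal sketch of a ‘quarantine this identity’ path – the IdP endpoints, environment variables and identity ID here are hypothetical stand-ins, not any real vendor’s API.

```python
import os

import requests  # third-party: pip install requests

# Hypothetical IdP API -- endpoints, payloads and env vars are illustrative
# stand-ins, not any real vendor's interface. Adapt to your own IdP.
IDP_BASE = os.environ["IDP_BASE"]  # e.g. https://idp.example.com/api/v1
HEADERS = {"Authorization": f"Bearer {os.environ['IDP_TOKEN']}"}


def quarantine_identity(identity_id: str) -> None:
    """Disable an identity and revoke its live sessions, in that order."""
    # 1. Disable the account so no new tokens can be issued.
    resp = requests.post(
        f"{IDP_BASE}/identities/{identity_id}/disable",
        headers=HEADERS, timeout=10,
    )
    resp.raise_for_status()

    # 2. Revoke every session and refresh token it already holds.
    resp = requests.delete(
        f"{IDP_BASE}/identities/{identity_id}/sessions",
        headers=HEADERS, timeout=10,
    )
    resp.raise_for_status()
    print(f"{identity_id} quarantined: disabled, sessions revoked")


if __name__ == "__main__":
    quarantine_identity("svc-payments-bot")  # hypothetical non-human identity
```

The specific calls don’t matter; what matters is that a tested, scripted revocation path exists and runs in seconds, not meetings.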

AI models offer power through subversion 

Then there’s AI model integrity – a frontier we still talk about as if it’s theoretical.

But we need to treat it as a real, present threat. 

Because AI systems are increasingly making decisions on their own. They triage alerts, classify documents, route transactions, analyse patterns, and act as intermediaries in workflows.

And the weakness isn’t always the model itself; it might be the training data, the prompts, or the embedded behaviours. Poison the data, and the model is compromised. Subvert the agent, and the workflow becomes unsafe.

Again: he who can corrupt a thing, controls it. 
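
One practical way to act on that – catching poisoned data before a model trains on it or an agent acts on it – is to pin datasets to a known-good hash manifest and refuse to run if anything has drifted. A minimal sketch, using only the Python standard library; the file and manifest paths are hypothetical:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large datasets don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_dataset(data_dir: str, manifest_path: str) -> bool:
    """Compare every file in data_dir against a known-good hash manifest."""
    # Manifest format (hypothetical): {"train.csv": "<sha256>", ...}
    manifest = json.loads(Path(manifest_path).read_text())
    actual = {
        p.name: sha256_of(p)
        for p in sorted(Path(data_dir).iterdir()) if p.is_file()
    }

    ok = True
    for name, expected in manifest.items():
        if actual.get(name) != expected:
            print(f"TAMPERED OR MISSING: {name}")
            ok = False
    for name in actual.keys() - manifest.keys():
        print(f"UNEXPECTED FILE: {name}")  # possible injected samples
        ok = False
    return ok


if __name__ == "__main__":
    # Hypothetical paths; fail closed if anything has drifted.
    if not verify_dataset("training_data/", "manifest.json"):
        raise SystemExit("Dataset integrity check failed -- refusing to train.")
```

Anything added, removed or modified since the manifest was signed off stops the pipeline: fail closed, then investigate.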

When we interviewed Nikk Gilbert (CISO at RWE) for the blog, he said: 

“The risk that keeps me up at night is trust in machine decision-making. We’re handing over authority to AI systems in finance, logistics, and energy faster than we can test the edges. Rather than bias or privacy, the real danger is what happens when these systems act on poisoned or manipulated data at machine speed.

“There’s no safety net when decisions outpace human reaction time. By the time we realise something has gone wrong, the damage will already be done.”

It’s Herbert’s lesson again: if someone else can break the system faster than you can intervene, they hold the real power. 

This is why resilience will define 2026 

Capability is one thing – being able to detect, to automate, to create more identity types, to deploy more AI agents. 

But 2026 will be a good year for those who think in reverse, and focus instead on what happens if their systems (both the tech systems and the human systems) fail. 

Forward-thinking CISOs are considering what is and isn’t recoverable. They’re looking at identities, and how quickly those identities can be isolated. They’re stress-testing systems against AI models that make dangerous decisions. They’re developing routes to detect data poisoning or manipulation pre-execution; and they’re checking whether they can turn off an AI capability safely and quickly if they need to. 
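
That last check – a safe, fast off-switch – can be as simple as a fail-closed gate in front of every model call. A minimal sketch; the flag path and function names are hypothetical placeholders:

```python
from pathlib import Path

# Hypothetical kill-switch flag: ops can create it in seconds
# (`touch /etc/killswitch/ai_triage`) with no code deploy required.
KILL_SWITCH = Path("/etc/killswitch/ai_triage")


def triage_alert(alert: dict) -> str:
    """Route an alert via the AI model -- unless the kill switch is set."""
    if KILL_SWITCH.exists():
        # Fail closed: fall back to the human queue, never to the model.
        return escalate_to_human(alert)
    return model_triage(alert)


def model_triage(alert: dict) -> str:
    return f"auto-triaged: {alert['id']}"  # placeholder for the real model call


def escalate_to_human(alert: dict) -> str:
    return f"queued for analyst: {alert['id']}"  # placeholder fallback path


if __name__ == "__main__":
    print(triage_alert({"id": "ALRT-1042"}))
```

Because the switch is a file (or a feature flag) rather than a code change, an on-call responder can flip it in seconds without a deploy – and the fallback is the human queue, never the model.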

A final thought from Arrakis

In Dune, control belonged to the one player who could dictate terms, simply by being able to destroy the thing everyone else depended on.

And in a way, cybersecurity is entering that phase now.

So let’s focus on resilience together. 

Read more: Identity, sensitive data, AI agents – the risks walking with us into 2026
