The question that exposes your AI risk

by Black Hat Middle East and Africa



Exclusive interviews and insights from the global Black Hat MEA community – in your inbox every week. 

This week we’re focused on…

A question that tends to land with more silence than answers: who is actually in control of your AI systems?

Not in theory or on paper, but in production – where decisions are being made continuously. 

We asked Betania Allo (AI Governance and Trust Strategist): 

If you had five minutes with a board that thinks AI governance is 'handled by IT', how would you explain the business risk of not understanding their organisation’s control stack? 

She said: 

“I’ve been in that room more than once, and on both sides of the table.”

So what’s happening in the boardroom? 

For Allo, the starting point for understanding AI risk is personal: “everyone at the table is personally exposed.” 

Annual reports routinely assert that organisations maintain appropriate internal controls over material risks. AI systems making autonomous decisions now fall squarely into that category.

But treating them as ‘handled by IT’, she says, is “an assertion without architecture.”

And that issue only becomes visible when tested – in litigation, in regulatory scrutiny, or during an incident.

So she asks one question. 

“Can anyone here tell me which AI systems in your organisation are currently making autonomous decisions, what the defined boundaries of that autonomy are, and who holds override authority if one starts operating outside authorised parameters?”

Then she waits.

“That question produces silence almost universally.”

What does the silence mean? 

More than just awkward, that silence is diagnostic.

“That silence is the risk.”

It exposes a structural issue: organisations have been conditioned to see AI governance as a technical hygiene problem, rather than a control problem. As Allo puts it, “the underlying problem is cultural as much as technical.”

But culture alone won’t fix it.

For both CISOs and boards, this is where traditional assumptions start to break. Governance frameworks may exist, but without systems that define authority, monitor behaviour, and trigger intervention, they remain descriptive – not enforceable.

This is the change boards need to make 

The remedy, in Allo’s view, is operational.

“The ask is precise: mandate that your organisation maps its AI systems with the same rigour applied to critical IT infrastructure.”

That means clarity on ownership, decision authority, monitoring coverage, and escalation chains – not as abstract responsibilities, but as defined and testable controls.
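To make that mapping exercise concrete, here is a minimal illustrative sketch (not from the interview – the system names, fields, and thresholds are invented): an inventory record that captures exactly what Allo's boardroom question asks for, with a check that surfaces which answers are missing.

```python
from dataclasses import dataclass, field

@dataclass
class AIControlRecord:
    """One entry in a hypothetical AI control register (illustrative only)."""
    system: str                   # the AI system being governed
    owner: str                    # accountable business owner, not just IT
    autonomous_decisions: list    # decisions the system may make unassisted
    autonomy_boundary: str        # defined limits of that autonomy
    override_authority: str       # who can intervene if it exceeds parameters
    monitored: bool               # is its behaviour continuously monitored?
    escalation_chain: list = field(default_factory=list)

    def control_gaps(self):
        """Return the boardroom questions this record cannot yet answer."""
        gaps = []
        if not self.autonomy_boundary:
            gaps.append("no defined autonomy boundary")
        if not self.override_authority:
            gaps.append("no named override authority")
        if not self.monitored:
            gaps.append("no monitoring coverage")
        if not self.escalation_chain:
            gaps.append("no escalation chain")
        return gaps

# Fictional example: an autonomous credit model with an incomplete record.
record = AIControlRecord(
    system="credit-scoring-model",
    owner="Head of Retail Lending",
    autonomous_decisions=["approve consumer loans under $10k"],
    autonomy_boundary="consumer loans below $10k only",
    override_authority="",        # gap: nobody holds the kill switch
    monitored=True,
)
print(record.control_gaps())
```

The point of the sketch is that each field is a testable control, not an abstract responsibility: an empty field is an audit finding, and the same silence Allo describes in the boardroom.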

And in regions investing heavily in AI, the stakes extend beyond the enterprise.

“Sovereign AI capability without sovereign AI control infrastructure is strategic exposure, not strategic advantage.”

If you want to learn more, we’ve been talking to Allo about this all week. 

Read this next: The Intelligent Control Stack – why AI needs a cybersecurity mindset 

Then read the full interview: Who’s in control? 

And when you’re ready, ask the question in your own boardroom – and see what happens. 
