The Intelligent Control Stack: why AI needs a cybersecurity mindset
Discover a practical model for AI governance that applies cybersecurity principles to real-time monitoring, control, and accountability in production systems.
“The Stack was designed to replace behavioural dependency with enforceable control architecture. That’s the shift most organisations haven’t made.”
In our previous blog post, we looked at the fundamentals of the Intelligent Control Stack – Betania Allo’s framework for treating AI systems like critical infrastructure, not experimental tools.
But the Stack raises harder questions – starting with where AI control actually breaks down in practice. So we asked Allo to tell us more.
“In practice, it breaks down in organisational governance, but not for the reasons most people cite. The standard answer is culture and awareness, and while those matter, they’re symptoms. The underlying condition is the absence of enforceable control architecture.
“Across client engagements in the Gulf’s regulated sectors, I see the same pattern consistently. Organisations have appointed responsible AI officers, published internal policies, even run awareness programmes. Then I ask one question: show me your production monitoring architecture for the systems those policies govern.
“Nine out of ten times the room goes quiet.
“The awareness existed but the control infrastructure didn’t. Those are not the same thing, and conflating them is one of the most expensive mistakes I encounter.
“At one semi-government client, where I worked at the intersection of cybersecurity strategy and sovereign infrastructure, the failure point wasn’t the model itself; it was the absence of anyone with simultaneous visibility into system behaviour and pre-assigned authority to act on what they saw. Governance had been treated as a pre-deployment activity. Once the system was live, the structure dissolved.
“The question of who is in control sounds philosophical until you’re inside an incident. Then it becomes very operational, very fast. Culture doesn’t stop a model from drifting. Awareness programmes don’t trigger escalation at a threshold breach. The Stack was designed to replace behavioural dependency with enforceable control architecture. That’s the shift most organisations haven’t made.”
“When I built the AI and TMT practice at Legal Tracks from scratch, one of the first things I had to confront was that Saudi organisations are simultaneously managing PDPL, NCA ECC-2, SDAIA’s AI ethics principles, and the extraterritorial reach of the EU AI Act. The instinct is to run parallel compliance programmes for each framework. That instinct is expensive and it creates the gaps it’s supposed to close.
“The Stack’s design logic addresses this directly. NIST, ISO 42001, and the EU AI Act converge at the control action level. They all require continuous monitoring in production, structured escalation at threshold breach, and durable audit trails a regulator can verify. The Stack engineers those as a single control infrastructure rather than three parallel workstreams. One telemetry layer. One escalation protocol. One audit record satisfying multiple obligations simultaneously.”
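Allo doesn’t publish a schema for that shared record, but a minimal sketch helps make the idea concrete. Everything below – the field names, the metric, the ControlEvent class – is an illustrative assumption, not the Stack’s actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ControlEvent:
    """One record, multiple obligations: the same entry serves as
    monitoring evidence, escalation trigger, and audit trail."""
    system_id: str            # which production AI system emitted the signal
    metric: str               # e.g. "input_drift_psi"
    value: float              # observed value at capture time
    threshold: float          # the escalation threshold in force
    breached: bool            # did this observation cross the threshold?
    escalated_to: str | None  # accountable role notified, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: one event, written once, queryable as
# monitoring evidence and as the audit trail of the response.
event = ControlEvent(
    system_id="credit-scoring-v3",
    metric="input_drift_psi",
    value=0.31,
    threshold=0.25,
    breached=True,
    escalated_to="model-risk-officer",
)
```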
“In practice, operationalisation starts with a map: which systems are in production, what decisions are they authorised to make autonomously, who holds override authority. That third question is where most organisations stall. Not because the answer is complicated, but because nobody has ever been asked to answer it formally.”
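What might that map look like? A minimal sketch – system names, fields and values here are hypothetical, chosen only to show the shape of the inventory:

```python
# One entry per production system; fields are illustrative assumptions.
AI_SYSTEM_REGISTRY = [
    {
        "system": "loan-pre-approval",
        "autonomous_decisions": ["approve applications below a set limit"],
        "override_authority": "head-of-credit-risk",  # the third question
        "authority_formally_notified": True,
        "authority_tested_in_drill": False,
    },
]

def untested_authority(registry: list[dict]) -> list[str]:
    """Surface systems where override authority exists on paper only."""
    return [
        entry["system"]
        for entry in registry
        if not (entry["authority_formally_notified"]
                and entry["authority_tested_in_drill"])
    ]
```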
“Once the map exists, the control plane takes over. Layer 2 instruments production behaviour. Layer 3 defines the precise thresholds at which human authority must activate. Layer 4 ensures the system learns from every incident rather than repeating it. Layer 5 translates all of it into board-level accountability. One architecture, multiple obligations satisfied simultaneously. Culture and awareness don’t close instrumentation gaps – but a control plane does.”
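A corresponding sketch of the Layer 3 control point – the metric names, thresholds and escalation hook are assumptions for illustration, not values the Stack prescribes:

```python
import logging

logger = logging.getLogger("control_plane")

# Layer 3 in miniature: the thresholds at which human authority
# must activate. Metrics and limits are illustrative assumptions.
ESCALATION_THRESHOLDS = {"input_drift_psi": 0.25, "override_rate": 0.10}

def check_and_escalate(metric: str, value: float, notify_authority) -> bool:
    """Compare a Layer 2 telemetry reading against its Layer 3 threshold;
    on breach, activate the pre-assigned human authority and leave a
    durable record for Layers 4 and 5 to consume."""
    limit = ESCALATION_THRESHOLDS.get(metric)
    if limit is None or value <= limit:
        return False
    logger.warning("threshold breach: %s=%.3f (limit %.3f)", metric, value, limit)
    notify_authority(metric, value, limit)  # e.g. page the override holder
    return True
```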
“I’m a lawyer and policy expert who has spent years inside cybersecurity functions at sovereign infrastructure scale, including in my doctoral studies. I’ll be blunt: the discipline I trained in is structurally insufficient for adaptive systems, and the gap is widening as deployment accelerates.
“Three failures. First, classical security assumes a stable attack surface between hardening cycles. Adaptive systems have dynamic behavioural surfaces. A model that passed every validation check at deployment can be progressively manipulated through its live input distribution. There is no CVE for this. Most SIEM architectures have no detection logic for it. Traditional cybersecurity looks outward for attackers. AI cybersecurity looks inward at the system’s own behaviour, decisions, and emergent risks.”
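There is no CVE feed for that failure mode, but it is detectable. One standard signal is the population stability index, which compares a model’s live input distribution against its validation-time baseline. This sketch illustrates the general technique, not anything the Stack mandates:

```python
import numpy as np

def population_stability_index(baseline, live, bins: int = 10) -> float:
    """Quantify how far a feature's live input distribution has drifted
    from its validation-time baseline. By common rule of thumb (an
    assumption, not a Stack threshold), PSI above ~0.25 signals
    drift significant enough to escalate."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) on empty bins
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))
```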
“Second, the CIA triad is the wrong risk taxonomy. A system can be fully intact across confidentiality, integrity, and availability and simultaneously produce systematically biased, manipulated, or legally indefensible decisions. That failure mode is invisible to conventional monitoring.
“Third, and this is where my work on organised crime and emerging technology becomes directly relevant: adversarial manipulation of AI decision systems is not a theoretical concern in contested environments. The question of who controls a model’s input distribution when adversarial actors are actively probing it is a live operational question. Most corporate governance frameworks aren’t built for that threat surface at all.
“What needs to change fastest is detection logic at the model output layer. CISOs need to instrument what the system is deciding, not just whether the perimeter is clean. Awareness of these risks is now reasonably widespread. Operational response to them is not.”
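Instrumenting the output layer can start simply: track the distribution of decisions the model is actually making and flag sharp movement away from the deployment baseline. A minimal sketch, with the window size and tolerance as illustrative assumptions:

```python
from collections import Counter, deque

class DecisionMonitor:
    """Watch what the system is deciding, not just whether the
    perimeter is clean: flag decision classes whose share of a
    rolling window moves sharply from the deployment baseline."""

    def __init__(self, baseline: dict[str, float],
                 window: int = 1000, tolerance: float = 0.05):
        self.baseline = baseline   # expected share per decision class
        self.recent: deque = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, decision: str) -> list[str]:
        """Record one decision; return any classes now out of tolerance."""
        self.recent.append(decision)
        counts = Counter(self.recent)
        total = len(self.recent)
        return [
            cls for cls, expected in self.baseline.items()
            if abs(counts[cls] / total - expected) > self.tolerance
        ]

# Hypothetical usage: a model that approved ~60% of cases at validation.
monitor = DecisionMonitor(baseline={"approve": 0.60, "decline": 0.40})
```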
“A policy that names a responsible officer for AI risk without giving that officer operational visibility is just a liability designation. Accountability has to be enforceable. That’s critical when something goes wrong, and it matters even more when a regulator starts asking questions.
“Across client engagements I ask three diagnostic questions: who holds override authority over your autonomous systems, have they been formally told, and has that authority been tested before you needed it in a live incident?
“The answer is almost universally no on all three counts. The awareness of risk exists, but the enforceable accountability structure does not.
“In Saudi Arabia, reporting to the CISO and working directly with senior leadership on enterprise-wide cybersecurity governance, I learned that accountability functions only when three conditions are simultaneously true: the responsible person has real-time visibility into system behaviour, they have pre-defined authority to act at specific thresholds, and there is an auditable record of both the signal and the response.
“The Stack creates those conditions architecturally. Layer 2 generates the continuous behavioural record. Layer 3 defines the precise conditions under which human intervention is mandatory and timestamps that intervention. Layer 4 is where most frameworks stop existing and the Stack keeps going: incident history actively recalibrates future control logic, so the architecture gets harder to breach over time. Layer 5 gives boards evidence-grounded exposure data rather than filtered narrative.”
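Layer 4’s recalibration loop can be pictured in a few lines. In this sketch the 10% tightening step is a pure assumption; the point is the shape of the feedback, with confirmed incidents feeding back into the thresholds themselves:

```python
def recalibrate_threshold(current: float, incidents: list[dict]) -> float:
    """Layer 4 in miniature: when incident history shows breaches that
    escalated too late, ratchet the escalation threshold tighter so the
    architecture gets harder to breach over time. The 10% step per late
    escalation is an illustrative assumption."""
    late = [i for i in incidents if i.get("escalation_late")]
    if not late:
        return current
    return current * (0.9 ** len(late))

# e.g. two late escalations tighten a 0.25 drift threshold to ~0.20
new_limit = recalibrate_threshold(0.25, [{"escalation_late": True},
                                         {"escalation_late": True}])
```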
“In the sovereign context this is not abstract. When an AI system inside national critical infrastructure fails, the accountability chain must be pre-defined and defensible to a regulator or a court. You cannot construct it after the incident. That pre-defined chain is exactly what the Stack provides.”
Thanks to Betania Allo. Join us at Black Hat MEA 2026 to immerse yourself in the heart of cybersecurity, and build resilience for the future.