The AI governance illusion
AI governance looks strong on paper, but the data tells a different story. Here’s why AI visibility, identity and accountability still fall short of real control.
Shadow IT – unsanctioned tools entering enterprise environments – has been around for a long time. Shadow AI, however, has pushed those shadows to the front of mind for security leaders and their organisations.
New research from Purple Book notes that 59% of organisations confirm or suspect employees are using AI tools without approval from IT or security teams, with 25.9% saying this is definitely happening and 32.6% reporting it is probably happening.
This level of adoption places shadow AI firmly within the mainstream.
Enterprises now operate with two overlapping AI environments: one sanctioned, visible, and formally governed; the other unsanctioned and emergent, growing through everyday usage.
The gap between these environments is measurable. 86% of organisations claim a complete AI inventory, yet 57% of those same organisations still report shadow AI.
Enterprise AI now runs on parallel tracks – one visible, one emergent.
The drivers of shadow AI sit in plain sight. According to Purple Book, 66% of organisations report extensive AI use in software development, and 78% have deployed or are piloting agentic AI systems capable of autonomous action.
Simultaneously, Thoropass notes that 69% say AI adoption is ahead of security and compliance controls.
Business units pursue productivity gains, and developers integrate AI into workflows. Tools then spread through usage rather than approval.
The primary risk associated with shadow AI centres on data.
The security leaders in Purple Book’s survey cited sensitive data exposure as their top concern.
The Thoropass research shows that employees using unapproved AI tools represent a major risk vector (38.2%), particularly when those tools process proprietary or regulated data. AI-related data misuse or exposure is the most likely trigger for regulatory and customer consequences (55.2%).
Another industry pulse from ISACA highlights governance gaps around transparency and oversight, with many organisations lacking consistent disclosure or control mechanisms for AI-assisted work.
Importantly, shadow AI increasingly extends beyond individual tools.
CSA finds that AI agents now operate across development pipelines, security monitoring, and infrastructure management, with adoption rates reaching 67% for task automation and 50% across development and security use cases. Only 15% of organisations report no production use of AI agents.
This progression moves the risk from unsanctioned usage to unsanctioned execution.
Formal governance processes capture official deployments. Most organisations report structured processes for introducing AI into production environments.
But shadow AI exists outside those processes. It enters through experimentation, integrates into workflows, and scales through usage.
The result is a meaningful gap between documented architecture and operational reality.
Security tooling adds another layer of complexity. Purple Book’s research shows over half of organisations run 11 or more security tools, while 81.6% report that tool fragmentation affects their ability to prioritise and remediate risks. Nearly half admit spending significant time on issues that carry limited impact.
Shadow AI reflects how modern enterprises adopt technology: bottom-up, through experimentation and usage rather than formal approval. Addressing it requires a corresponding shift in approach – designing governance for the AI that is actually in use, not only the AI that has been sanctioned.
Shadow AI is more of an architectural condition than an anomaly. And organisations that recognise this can design controls that operate across the full AI landscape – not just the part that is formally approved.