Is shadow AI now the default enterprise architecture?

by Black Hat Middle East and Africa

Shadow IT has been around for a long time – a phrase used to describe unsanctioned tools entering enterprise environments. Shadow AI, however, has brought the shadows to front-of-mind for security leaders across organisations.

New research from Purple Book notes that 59% of organisations confirm or suspect employees are using AI tools without approval from IT or security teams, with 25.9% saying this is definitely happening and 32.6% reporting it is probably happening. 

This level of adoption places shadow AI firmly within the mainstream.

Two parallel architectures

Enterprises now operate with two overlapping AI environments:

  1. Approved systems tracked through governance
  2. Unapproved tools embedded in everyday workflows

The gap between these environments is measurable. Eighty-six percent of organisations claim a complete AI inventory, yet 57% of those same organisations still report shadow AI. 

Enterprise AI now runs on parallel tracks – one visible, one emergent.

Adoption outruns governance

The drivers of shadow AI sit in plain sight. According to Purple Book, 66% of organisations report extensive AI use in software development, and 78% have deployed or are piloting agentic AI systems capable of autonomous action. 

Simultaneously, Thoropass notes that 69% say AI adoption is ahead of security and compliance controls.

Business units pursue productivity gains, and developers integrate AI into workflows. Tools then spread through usage rather than approval.

It’s a data problem at its core

The primary risk associated with shadow AI centres on data.

The security leaders in Purple Book’s survey cited sensitive data exposure as their top concern. 

And the Thoropass research shows that employees using unapproved AI tools represent a major risk vector (38.2%), particularly when those tools process proprietary or regulated data. AI-related data misuse or exposure stands as the most likely trigger for regulatory and customer consequences (55.2%). 

Another industry pulse from ISACA highlights governance gaps around transparency and oversight, with many organisations lacking consistent disclosure or control mechanisms for AI-assisted work. 

From tools to actions

Importantly, shadow AI increasingly extends beyond individual tools.

CSA finds that AI agents now operate across development pipelines, security monitoring, and infrastructure management, with adoption rates reaching 67% for task automation and 50% across development and security use cases. Only 15% of organisations report no production use of AI agents. 

This progression moves the risk from unsanctioned usage to unsanctioned execution.

Governance sees part of the picture

Formal governance processes capture official deployments. Most organisations report structured processes for introducing AI into production environments.

But shadow AI exists outside those processes. It enters through experimentation, integrates into workflows, and scales through usage.

And the result is a meaningful gap between documented architecture and operational reality.

Security tooling adds another layer of complexity. Purple Book’s research shows over half of organisations run 11 or more security tools, while 81.6% report that tool fragmentation affects their ability to prioritise and remediate risks. Nearly half admit spending significant time on issues that carry limited impact.

How do we design for reality? 

Shadow AI reflects how modern enterprises adopt technology:

  • Fast, decentralised, and workflow-driven
  • Accessible without formal procurement
  • Embedded directly into daily operations

And addressing this requires a shift in approach:

  • Extend visibility into user behaviour and data flows
  • Focus on data governance rather than tool control
  • Integrate AI monitoring into existing security workflows
  • Align governance with real adoption patterns

Shadow AI is more of an architectural condition than an anomaly. And organisations that recognise this can design controls that operate across the full AI landscape – not just the part that is formally approved.
