AI agent incidents: the gap between expectations and budgets

by Black Hat Middle East and Africa

New data from Arkose Labs shows that enterprise leaders are bracing for the impact of AI agent incidents – but spending and governance still don’t treat this as a ‘now’ problem. 

In its 2026 report on agentic AI security, 97% of enterprise leaders surveyed say they expect a material AI-agent-driven security or fraud incident within 12 months. Nearly half (49%) expect one within six months. At the same time, organisations allocate roughly 6% of security budgets to AI-agent risk, and 10% do not track that risk separately at all.

We have to close that gap 

For all the noise around agentic AI, the report points to a more practical problem inside large organisations: AI agents are moving into real workflows faster than governance, identity controls and investigative visibility can keep up. The survey behind it covers 300 enterprise leaders across security, fraud, identity and AI functions, and was conducted globally in February 2026.

At Black Hat MEA 2025, William Lin (Co-founder and CEO at AKA Identity) said: 

“Today I’m spending a lot of time thinking about one specific place within AI – and that is agents. The idea of agents doing autonomous work, the idea of agents doing reasoning on behalf of an organisation.” 

That’s what this Arkose Labs report is about – autonomous systems acting inside the enterprise. 

The problem is identity as much as intelligence

The report keeps returning to the reality that AI agents do not always look like intruders. They often operate with legitimate credentials, through service accounts, API tokens and approved application identities. That makes them harder to isolate and harder to investigate – and what they do often looks like routine business activity. 

So it makes sense that 87% of respondents agree that AI agents operating with legitimate credentials pose a greater insider threat risk than human employees.

This is difficult to manage, because many enterprise controls were built around a simpler model: a human user, a suspicious action and a perimeter to defend. Agentic systems are messier. They move across workflows and create long decision chains that can be difficult to reconstruct after the fact.

Arkose’s numbers on attribution are especially telling. Only 26% say they are very confident they could definitively prove whether an AI agent caused a security or fraud incident. That’s a serious weakness in any future where autonomous agents handle onboarding, transactions, customer operations or internal decision support.

Or, as the report puts it: “Security teams cannot investigate what they cannot observe.”
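
To make that concrete, here is a minimal sketch of what observable agent activity could look like. The `AgentEvent` record and `AgentAuditLog` store are illustrative assumptions, not anything from the Arkose report: the point is simply that every action an agent takes is logged against a stable agent identity and a chain ID tying it to the decision sequence that produced it.

```python
# Illustrative sketch only: field names and classes are assumptions,
# not a product API. The idea: every agent action is recorded with a
# stable agent identity and a chain_id linking it to one decision chain.
import json
import time
import uuid
from dataclasses import asdict, dataclass, field


@dataclass
class AgentEvent:
    agent_id: str        # which non-human identity acted
    chain_id: str        # links all steps of one decision chain
    action: str          # e.g. "read_record", "approve_refund"
    target: str          # resource or system acted upon
    credential_ref: str  # token or service account used (a reference, never the secret)
    ts: float = field(default_factory=time.time)
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class AgentAuditLog:
    """Append-only trail of agent actions, queryable by chain for attribution."""

    def __init__(self) -> None:
        self._events: list[AgentEvent] = []

    def record(self, event: AgentEvent) -> None:
        self._events.append(event)
        print(json.dumps(asdict(event)))  # in practice: ship to a SIEM

    def chain(self, chain_id: str) -> list[AgentEvent]:
        """Reconstruct one decision chain, oldest step first."""
        return sorted(
            (e for e in self._events if e.chain_id == chain_id),
            key=lambda e: e.ts,
        )
```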

Leaders see the risk, but organisations are still underprepared 

Of the 300 leaders surveyed, 76% say the C-suite is either not deeply involved in AI-agent security decisions or lacks a strong understanding of the risks. Meanwhile, 57% of organisations report having no formal AI-agent governance controls today, even though 88% expect defined or advanced frameworks within three years.

That sounds like a familiar enterprise habit: consensus on the danger, followed by a long queue for ownership, budget and enforcement.

Gary Hayslip (CISO at Halcyon) described the broader dynamic like this at Black Hat MEA 2025:

“Security is playing catch up to what the cyber criminals are doing. They can scale and be innovative a lot faster than the security community can.” 

In the context of this report, that line lands hard. Enterprises do understand the risk – but the issue is whether their operating model can move at the same pace as the systems they’re deploying.

And there’s another twist here: 78% believe current tools can distinguish malicious AI agents from legitimate activity today, yet roughly 72% are concerned those defences will struggle as attacks evolve. So confidence does exist, but it comes with an expiry date.

Agentic AI risk is real now 

This isn’t a future debate. It’s an identity, governance and attribution problem arriving inside current enterprise systems.

If you’re facing this problem in your organisation, the first step to mitigating exposure is to treat non-human identities – service accounts, API tokens, agent credentials – as first-class security entities.
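
As a hedged sketch of what ‘first-class’ could mean in practice (the fields below are assumptions, not a standard and not something the report prescribes): give every agent identity an accountable human owner, an explicit scope allow-list and an expiry date, the same way you would a privileged human account.

```python
# Illustrative only: a minimal registry entry that treats a non-human
# identity as a first-class entity, with an accountable owner, scoped
# permissions and a forced expiry, rather than an anonymous
# long-lived service account.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str           # e.g. "agent:invoice-triage"
    owner: str              # the human accountable for this agent
    scopes: frozenset[str]  # explicit allow-list of actions
    expires: datetime       # forces periodic review, like certificate rotation

    def may(self, scope: str) -> bool:
        """Deny by default: expired identities and unknown scopes both fail."""
        return datetime.now(timezone.utc) < self.expires and scope in self.scopes


# Hypothetical agent and owner, purely for illustration.
bot = AgentIdentity(
    agent_id="agent:invoice-triage",
    owner="alice@example.com",
    scopes=frozenset({"invoices:read", "invoices:flag"}),
    expires=datetime.now(timezone.utc) + timedelta(days=90),
)
assert bot.may("invoices:read")
assert not bot.may("payments:execute")  # out of scope, denied
```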

Second, build visibility into automated decision chains before an incident forces the issue – an after-the-fact investigation is the worst time to discover blind spots.
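
Continuing the hypothetical sketch from earlier, attribution then becomes a query over the recorded trail rather than a reconstruction from scattered system logs:

```python
# Hypothetical incident review, reusing the AgentEvent / AgentAuditLog
# sketch above: which agent touched this order, and through what steps?
import uuid

log = AgentAuditLog()
cid = str(uuid.uuid4())
log.record(AgentEvent("agent:refund-bot", cid, "read_record",
                      "orders/8841", "token:svc-refunds"))
log.record(AgentEvent("agent:refund-bot", cid, "approve_refund",
                      "orders/8841", "token:svc-refunds"))

# Replay the full decision chain behind the suspicious refund.
for step in log.chain(cid):
    print(f"{step.ts:.0f} {step.agent_id} {step.action} -> {step.target}")
```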

And third, bring security leadership into AI deployment early. Retrofitting controls onto autonomous systems is how organisations end up with impressive pilots and fragile operations.

When we spoke to Stefan Baldus (CISO at Hugo Boss) at BHMEA25, he offered a fitting note to end on:

“I think the combination of human and AI will be shaping us next year.” 

Agentic systems may be arriving fast, but resilience still depends on who governs them, and who can explain what they just did.
