Are AI agents creating a new security risk?

by Black Hat Middle East and Africa

The global Black Hat MEA community is building a more resilient future. Build it with us.

This week we’re focused on…

AI agents, and whether they’re creating a new security risk. 

At Black Hat MEA 2024, Craig Jones (Immediate Past Director of Cybercrime at Interpol) told us he’s more excited about how people use tech than about the tech itself.

“You can do some great work using technology, even if you don’t fully understand it,” he said. But that also means we need checks and balances to make sure we don’t misuse it.

Jones believes that building a strong cyber culture, where people understand their roles and responsibilities, is more powerful than any algorithm. And with the emergence of AI agents, the need for that clear understanding is more important than ever.

How AI agents are growing the attack surface 

AI agents (also known as Agentic AI) are autonomous systems that can perceive, decide, and act to achieve specific goals. They’re already widely adopted: 82% of organisations are using them today, according to SailPoint’s latest research.
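To make that definition concrete, here’s a minimal sketch of the perceive-decide-act loop in Python. Everything in it (the stub planner, the toy state) is a hypothetical placeholder rather than any vendor’s actual API; a real agent would swap in an LLM or planner and real system connectors.

```python
# Minimal sketch of the perceive-decide-act loop behind an AI agent.
# Every name here is a hypothetical placeholder, not a specific vendor's API.

def choose_action(goal: str, observation: str) -> str:
    # A real agent would call an LLM or planner here; this is a stub.
    return f"next step towards '{goal}' given '{observation}'"

def run_agent(goal: str, max_steps: int = 3) -> None:
    state = "initial state"
    for step in range(max_steps):
        observation = state                        # perceive the environment
        action = choose_action(goal, observation)  # decide what to do next
        state = f"state after step {step}: {action}"  # act on the environment
        print(state)

run_agent("summarise quarterly sales")
```

The key property is the loop itself: the agent keeps acting on its environment without a human approving each step. That autonomy is exactly what makes the governance questions below matter.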

But more than half of these AI agents are accessing sensitive company data. And worse, 80% of organisations say their AI agents have performed unintended actions: everything from accessing inappropriate systems to sharing confidential data, or even revealing login credentials.

Let that sink in: nearly one in four companies reported that AI agents had been coaxed into giving away access credentials.

A threat bigger than human or machine identities?

AI agents often hold more privileges than both human and machine identities. They typically need access to multiple systems, apps and data points to complete a task – and that makes them hard to govern and track.

A worrying 72% of tech professionals now say AI agents pose a greater risk than machine identities. And 54% agree that these agents have broader access to systems and data than human users. 

On top of this, their access is usually provisioned rapidly, and often only by IT – without oversight from compliance, HR or legal teams.
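One way to close that gap is to provision agent access as policy-as-code, with explicit scopes, an expiry date, and sign-off from more than just IT. The sketch below is our illustration under those assumptions; the identity name, scopes and approval fields are hypothetical, not any real product’s schema.

```python
# Hypothetical policy-as-code sketch: an AI agent's access grant that is
# scoped, time-boxed, and inert until non-IT stakeholders sign off.
from datetime import datetime, timedelta, timezone

AGENT_GRANT = {
    "identity": "agent:invoice-processor",  # the agent is a first-class identity
    "scopes": ["erp:invoices:read", "erp:invoices:write"],  # least privilege
    "expires": datetime.now(timezone.utc) + timedelta(days=30),  # time-boxed
    "approvals": {"it": True, "compliance": False, "legal": False},
}

def is_active(grant: dict) -> bool:
    """A grant is usable only if everyone approved and it hasn't expired."""
    return all(grant["approvals"].values()) and datetime.now(timezone.utc) < grant["expires"]

print(is_active(AGENT_GRANT))  # False: compliance and legal haven't approved yet
```

The point of the design is that the grant stays inert until compliance and legal approve, and it expires on its own after 30 days instead of lingering indefinitely.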

The visibility gap is just as risky: only 52% of companies can currently audit the data accessed or shared by AI agents, which means nearly half don’t know whether sensitive data is being exposed.

Governance isn’t catching up yet

There’s strong awareness of the risk: 96% of technology professionals consider AI agents a growing threat, and 92% say governing these agents is critical to enterprise security. But only 44% of organisations have governance policies in place for them.

This signals a communication gap. While IT teams are usually aware of the data AI agents can access (71%), that awareness drops off fast across the business. Just 47% of compliance teams, 39% of legal, and only 34% of executives are informed.

That matters. Because data breaches, compliance violations and reputational damage often stem not from malicious activity, but from poor visibility and oversight.

Still deploying despite the risk 

The warning signs are there. But the AI agent rollout shows no sign of slowing down. 

According to SailPoint, a massive 98% of organisations plan to expand their use of AI agents in the next 12 months – across departments including cybersecurity, HR, software development and customer service.

So we’re charging ahead with deployment, but often without the policies or controls needed to manage the risk.

So, are AI agents a new security risk?

Absolutely. But not because the technology itself is flawed. The risk lies in how we use and manage it.

AI agents offer real value: they automate workflows, streamline operations, and scale intelligence. But they also blur the boundaries between human and digital identity. Their capacity to act without active input from humans (and access sensitive systems with minimal oversight) makes them uniquely risky.

To mitigate this, organisations must treat AI agents as first-class digital identities. That means:

  • Governing access with the same rigour we apply to humans
  • Creating audit trails of what data they touch and share (see the sketch after this list)
  • Involving legal, compliance and executive teams in access decisions
  • Rolling out identity security solutions tailored for multi-access agents
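
As an illustration of the audit-trail point, here’s a minimal sketch in Python. The wrapper, agent IDs and resource URIs are hypothetical, not a product API; the idea is simply that no agent data access happens without leaving a structured log entry behind.

```python
# Minimal audit-trail sketch (our illustration, not a product API): every
# piece of data an agent touches is logged as structured JSON before the
# access goes ahead. Agent IDs and resource URIs here are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent-audit")

def audited_access(agent_id: str, resource: str, action: str, fetch):
    """Record who touched what, when, and how, then perform the access."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "resource": resource,
        "action": action,
    }))
    return fetch(resource)

# Example: a support agent reading one customer record leaves a trace.
record = audited_access(
    "agent:support-bot", "crm://customers/42", "read",
    fetch=lambda r: {"id": 42, "name": "Example Customer"},
)
print(record)
```

With a log like this in place, the 52% audit gap above becomes a solvable engineering problem rather than a blind spot.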

As Craig Jones reminded us, it’s not just about what the technology can do; it’s about how we choose to use it. With AI agents, we’re at a tipping point. Either we get ahead of the risk, or we wait for the consequences.

The tools are out there. The awareness is growing. What we need now is action.

Share your best three tips 

Cybersecurity practitioners and leaders, we want to hear from you. What are your top three tips for mitigating the risk of AI agents? Open this newsletter on LinkedIn and share them in the comment section. 
