
This week: AI agents, and whether they're creating a new security risk.
At Black Hat MEA 2024, Craig Jones (Immediate Past Director of Cybercrime at Interpol) told us he’s more excited about how people use tech than about the tech itself.
“You can do some great work using technology, even if you don’t fully understand it,” he said. But that also means we need checks and balances to make sure we don’t misuse it.
Jones believes that building a strong cyber culture, where people understand their roles and responsibilities, is more powerful than any algorithm. And with the emergence of AI agents, the need for that clear understanding is more important than ever.
AI agents (also known as Agentic AI) are autonomous systems that can perceive, decide, and act to achieve specific goals. They’re already widely adopted: 82% of organisations are using them today, according to SailPoint’s latest research.
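To make that definition concrete, here’s a minimal, purely illustrative sketch of the perceive-decide-act loop in Python. Everything in it (the environment, the goal, the actions) is a hypothetical stand-in, not any vendor’s agent framework:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """What the agent perceives from its environment."""
    inbox_count: int

def perceive(environment: dict) -> Observation:
    # Perceive: read the current state of the environment.
    return Observation(inbox_count=environment["inbox_count"])

def decide(obs: Observation) -> str:
    # Decide: choose an action that moves towards the goal
    # (here, the goal is simply an empty inbox).
    return "triage_email" if obs.inbox_count > 0 else "idle"

def act(action: str, environment: dict) -> None:
    # Act: change the environment. In a real deployment this is
    # where system access happens -- and where the risk lives.
    if action == "triage_email":
        environment["inbox_count"] -= 1

environment = {"inbox_count": 3}
while environment["inbox_count"] > 0:
    action = decide(perceive(environment))
    act(action, environment)
    print(f"agent performed: {action}")
```

The loop runs without a human in it. That autonomy is exactly what makes agents useful, and exactly what makes the findings below uncomfortable.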
But more than half of these AI agents are accessing sensitive company data. And worse, 80% of organisations say their AI agents have performed unintended actions: everything from accessing inappropriate systems and sharing confidential data to revealing login credentials.
Dwell on that last one for a moment: nearly one in four companies reported that AI agents had been coaxed into giving away access credentials.
AI agents often hold more privileges than both human and machine identities. They typically need access to multiple systems, apps and data points to complete a task – and that makes them hard to govern and track.
A worrying 72% of tech professionals now say AI agents pose a greater risk than machine identities. And 54% agree that these agents have broader access to systems and data than human users.
On top of this, their access is usually provisioned rapidly, and often only by IT – without oversight from compliance, HR or legal teams.
That lack of visibility is risky: only 52% of companies can currently audit the data accessed or shared by AI agents, which means nearly half don’t know whether sensitive data is being exposed.
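Closing that auditability gap can start simply. As a hedged sketch (all names here are hypothetical, not a real product’s API), one common pattern is to route every agent data access through a wrapper that writes an audit record before the data is returned:

```python
import json
import time

AUDIT_LOG = []  # in production: an append-only store or SIEM, not a list

def audited_read(agent_id: str, resource: str, reader) -> str:
    """Fetch a resource on behalf of an agent, recording who read what, when."""
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "resource": resource,
        "action": "read",
    })
    return reader(resource)

def fake_reader(resource: str) -> str:
    # Stand-in for a real data source.
    return f"<contents of {resource}>"

data = audited_read("hr-assistant-01", "payroll/2025-06.csv", fake_reader)
print(json.dumps(AUDIT_LOG, indent=2))
```

The point is the choke point: if agents can only reach data through audited paths, the “we don’t know what they touched” problem becomes tractable.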
There’s strong awareness of the risk: 96% of technology professionals consider AI agents a growing threat, and 92% say governing these agents is critical to enterprise security. But only 44% of organisations have governance policies in place for them.
This signals a communication gap. While IT teams are usually aware of the data AI agents can access (71%), that awareness drops off fast across the business. Just 47% of compliance teams, 39% of legal, and only 34% of executives are informed.
That matters. Because data breaches, compliance violations and reputational damage often stem not from malicious activity, but from poor visibility and oversight.
The warning signs are there. But the AI agent rollout shows no sign of slowing down.
According to SailPoint, a massive 98% of organisations plan to expand their use of AI agents in the next 12 months – across departments including cybersecurity, HR, software development and customer service.
So we’re charging ahead with deployment, but often without the policies or controls needed to manage the risk.
So, should we be worried? Absolutely. But not because the technology itself is flawed. The risk lies in how we use and manage it.
AI agents offer real value: they automate workflows, streamline operations, and scale intelligence. But they also blur the boundaries between human and digital identity. Their capacity to act without active input from humans (and access sensitive systems with minimal oversight) makes them uniquely risky.
To mitigate this, organisations must treat AI agents as first-class digital identities. Drawing on the gaps above, that means:

- Governance policies that explicitly cover AI agents, not just human and machine identities.
- Provisioning with oversight from compliance, HR and legal, not IT alone.
- Least-privilege access, scoped to the task and reviewed regularly.
- Full audit trails of the data each agent accesses or shares.
- Visibility beyond IT, so legal, compliance and executives know what agents can touch.
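What could “first-class digital identity” look like in practice? A minimal sketch, assuming a home-grown registry rather than any specific IAM product: each agent gets a named human owner, an entitlement list that starts empty, and an expiry date, so its access can be reviewed and revoked like any other identity’s:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str  # a named human accountable for the agent
    entitlements: set[str] = field(default_factory=set)  # least privilege: start empty
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=90)
    )

    def can_access(self, resource: str) -> bool:
        # Deny by default; deny everything once the identity has expired.
        if datetime.now(timezone.utc) >= self.expires_at:
            return False
        return resource in self.entitlements

# Provisioning with a recorded owner and an explicit grant,
# not an IT-side default that nobody reviews.
agent = AgentIdentity(agent_id="support-bot-7", owner="jane.doe@example.com")
agent.entitlements.add("tickets:read")

print(agent.can_access("tickets:read"))   # True
print(agent.can_access("payroll:read"))   # False -- never granted
```

The design choice that matters is deny-by-default with an expiry: an agent that can’t quietly accumulate standing access is an agent you can actually govern.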
As Craig Jones reminded us, it’s not just about what the technology can do; it’s about how we choose to use it. With AI agents, we’re at a tipping point. Either we get ahead of the risk, or we wait for the consequences.
The tools are out there. The awareness is growing. What we need now is action.
Cybersecurity practitioners and leaders, we want to hear from you. What are your top three tips for mitigating the risk of AI agents? Open this newsletter on LinkedIn and share them in the comment section.