Cybersecurity conversations lately have been dominated by AI risk. But while defenders focus on machines, attackers are still doing what they’ve always done best: exploiting people.
According to KnowBe4’s State of Human Risk 2025 report, 90% of cybersecurity leaders saw an increase in incidents linked to the human element over the last 12 months. More starkly, every single leader experienced at least one employee-related security incident in that period.
Human risk is structural. And while many organisations focus on getting AI under control, that human risk is becoming more dangerous.
The human attack surface is expanding, not shrinking
Unsurprisingly to any cybersecurity practitioner, email remains the primary entry point. A significant 64% of organisations suffered incidents caused by phishing, and 57% saw those incidents increase year on year. But the report makes it clear this is no longer just an email problem.
Successful attacks via Teams and Slack (39%), social media accessed on corporate devices (36%), and SMS phishing (31%) point to what the authors call ‘boundaryless phishing’. The perimeter has dissolved; human judgement is now the control plane.
The consequences are severe: 83% of organisations experienced account takeover, with phishing emails responsible for 59% of those breaches.
Even when incidents aren’t malicious, they’re still costly
Human risk doesn’t only exist because of external attackers. Among the cybersecurity leaders in KnowBe4’s research, 90% reported incidents caused by employee mistakes, from misdirected emails to oversharing via collaboration tools.
Deliberate insider activity is less common, but more dangerous when it does happen. Intentional insider incidents were reported by 36% of leaders, and only 6% were stopped before damage occurred. Data leaks to competitors, public disclosure, and data taken to new jobs were the most common outcomes.
As Nikk Gilbert (CISO at RWE) said when we asked him about the experiences he’s learnt the most from across his career:
“The military taught me the hardest lesson. You can have the best plan, the strongest team, and absolute clarity of mission. Yet, one small mistake – fatigue, pride, distraction – can completely alter the outcome. That truth never left me. Risk is not just technology; it is people. Strength comes from accepting human fallibility and building systems that can withstand it, not ignoring it.”
The KnowBe4 findings make the same point at scale: incidents don’t require mass failure at all – just a moment of human vulnerability in the wrong place.
Culture is the uncomfortable middle layer
The report exposes a deep cultural disconnect. Only 53% of employees believe the data they work with belongs to their organisation. Just 29% think everyone is personally responsible for protecting company data; most believe security is someone else’s job.
At the same time, 94% of employees would change something about their organisation’s cybersecurity programme, favouring proactive tools and personalised support over punishment. Yet 84% of leaders believe disciplinary procedures are effective. This mismatch is undermining trust and reporting.
AI raises the stakes for human error
Instead of replacing human risk, AI has increased its potential for harm. Incidents involving AI applications rose by 43% – the second-highest increase after email.
And employees are worried: 86% fear being tricked by deepfakes. But governance hasn’t caught up; 17% admit using AI tools at work without permission, creating fertile ground for shadow AI and data leakage.
Why ‘human risk management’ keeps coming up
KnowBe4’s research points to broad agreement on the solution, even though adoption lags behind: only 16% of organisations have a well-established human risk management (HRM) programme, and just 29% have excellent visibility into human risk.
But 97% of cybersecurity leaders want more budget to secure the human element, and over 90% believe real-time coaching, personalised training, and cross-platform visibility are effective.
That gap between appetite and maturity is where execution needs to happen.
Because as AI continues to change the threat landscape, the most predictable failures remain human. In 2026, getting back to being human may be the most technical security decision a CISO makes.