Expand your perspective and prepare for cyber resilience in 2026, with inspiration and insights from the global Black Hat MEA community.
This week we’re thinking about being boring.
Well, specifically, about the idea of AI being boring.
Because we’ve been reading new research about AI-driven security risks this week (we went deep into it on the blog – you can read that here).
And it got us thinking that maybe the best thing that could happen to AI in 2026 is that it becomes really, really boring.
Because security teams rarely fear boring technologies. No one wakes up dreading a JSON parser or wondering whether their SAML configuration has suddenly developed a personality.
Boring tech behaves the way you expect. Boring tech doesn’t hallucinate. Boring tech doesn’t decide to install a library you’ve never heard of.
AI, at the moment, is the opposite of boring.
It moves fast. It evolves faster. And right now, it sits in the unpredictability zone security teams dislike the most: the zone where it’s impossible to tell exactly where it’s used, how it’s behaving, or what it’s silently wiring into the codebase.
Which is exactly why the security industry may need AI to settle down.
Cycode’s survey on the state of product security for the AI era found that 100% of surveyed organisations already have AI-generated code in production, yet only 19% of security leaders say they have complete visibility into how AI is used across development. A full 65% report that vulnerability counts have increased since adopting AI coding assistants.
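That visibility gap is at least partly measurable with tools teams already have. Here’s a minimal Python sketch, assuming the AI coding tools in use tag their commits with a co-author trailer (some do by default; many don’t, so treat the result as a lower bound):

```python
# Minimal sketch: estimate the share of AI-assisted commits in a repo.
# Assumes your AI coding tools add a co-author trailer to commits;
# untagged AI output is invisible to this check.
import subprocess

# Illustrative markers only; adjust for the tools your teams actually use.
AI_MARKERS = ("co-authored-by: claude", "co-authored-by: github copilot")

def ai_commit_share(repo_path: str = ".") -> float:
    """Fraction of commits whose message carries an AI co-author trailer."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%B%x1e"],  # %x1e = record separator
        capture_output=True, text=True, check=True,
    ).stdout
    commits = [c for c in out.split("\x1e") if c.strip()]
    if not commits:
        return 0.0
    tagged = sum(1 for c in commits if any(m in c.lower() for m in AI_MARKERS))
    return tagged / len(commits)

if __name__ == "__main__":
    print(f"AI-assisted commits: {ai_commit_share():.0%}")
```

Crude, yes. But it turns “we can’t tell where AI is used” into a number you can track month over month.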
This is the dangerous middle phase of a technology boom: universal adoption without universal oversight.
And research from Endor Labs shows what that looks like on the ground. AI coding agents confidently recommend dependencies that don’t exist (34% of suggested versions were hallucinated) or that are known to be vulnerable (49%).
Only one in five dependency versions pulled in by agents was safe without additional controls.
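Both failure modes can be checked before anything is installed, using public data. Below is a rough Python sketch of a pre-install gate: it asks the PyPI JSON API whether the exact release actually exists (catching hallucinated names and versions), then asks the OSV.dev vulnerability database whether that version has known advisories. The vet helper is our illustration, not any vendor’s tooling:

```python
# Rough sketch of a pre-install gate for AI-suggested Python dependencies.
import json
import urllib.error
import urllib.request

def exists_on_pypi(name: str, version: str) -> bool:
    """True if this exact release is actually published on PyPI."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: name or version doesn't exist (likely hallucinated)

def known_vulns(name: str, version: str) -> list[str]:
    """IDs of OSV advisories that affect this exact version."""
    query = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": "PyPI"}}
    ).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]

def vet(name: str, version: str) -> None:
    """Block hallucinated or known-vulnerable releases before install."""
    if not exists_on_pypi(name, version):
        raise SystemExit(f"BLOCK {name}=={version}: not on PyPI")
    if vulns := known_vulns(name, version):
        raise SystemExit(f"BLOCK {name}=={version}: known advisories {vulns}")
    print(f"OK {name}=={version}")

if __name__ == "__main__":
    vet("requests", "2.31.0")  # a real, published release
```

Wire something like this into the agent’s tool loop or a pre-commit hook, and the one-in-five problem becomes a policy decision instead of a surprise.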
And the connective tissue is equally unpredictable. Their analysis of more than 10,000 Model Context Protocol (MCP) servers (the integration layer powering AI agents) found that 75% are maintained by individuals, 41% have no licence information, and 82% rely on sensitive APIs such as file system access or code execution. Each introduces around three vulnerable dependencies on average.
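A first-pass triage of your own MCP estate against those signals doesn’t need much either. In this hedged sketch, the McpServer record and its fields are hypothetical stand-ins for whatever inventory you actually keep of the servers your agents connect to:

```python
# Hedged sketch: triage an MCP server inventory against the risk signals
# above. The McpServer schema is hypothetical; populate it from whatever
# record you keep of approved integrations.
from dataclasses import dataclass, field

SENSITIVE_APIS = {"filesystem", "code_execution", "shell"}

@dataclass
class McpServer:
    name: str
    maintainer: str              # "org" or "individual"
    license: str | None          # None = no licence information published
    apis: set[str] = field(default_factory=set)

def risk_flags(server: McpServer) -> list[str]:
    """Return the review-worthy signals for one server."""
    flags = []
    if server.maintainer == "individual":
        flags.append("maintained by an individual")
    if server.license is None:
        flags.append("no licence information")
    if sensitive := server.apis & SENSITIVE_APIS:
        flags.append(f"sensitive APIs: {sorted(sensitive)}")
    return flags

if __name__ == "__main__":
    inventory = [
        McpServer("files-helper", "individual", None, {"filesystem"}),
        McpServer("crm-bridge", "org", "Apache-2.0", {"network"}),
    ]
    for server in inventory:
        for flag in risk_flags(server):
            print(f"[review] {server.name}: {flag}")
```

None of this replaces proper supply chain scanning; it just makes the 75/41/82 numbers concrete for your own environment.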
We’re not suggesting this is malicious. It’s just new – and new tech creates new attack surfaces faster than governance can form.

When technologies stabilise, security becomes dramatically easier.
The cloud didn’t become defensible until architectures, IAM patterns, and Zero Trust models stabilised. DevOps didn’t scale securely until we standardised around CI/CD pipelines, container images, and SAST/SCA workflows.
AI hasn’t reached that point. Its behaviour is still varied; its supply chain is still chaotic; its failure modes are still undocumented.
Predictable (or boring) AI would look very different: you’d know exactly where it’s used, its dependencies would be real and vetted, its integration layer would be licensed and maintained, and its failure modes would be documented.
As Google Cloud’s 2026 threat forecast warns, threat actors will soon treat AI as a default tool. If defenders are stuck in the novelty phase while attackers industrialise, it’s not a fair fight.
To be clear: we don’t believe AI is going to become boring any time soon.
But in the coming year, the combination of rising AI-driven attacks, growing regulatory pressure, and the operational realities inside engineering teams may finally push organisations from experimentation into discipline.
Cycode notes that 97% of organisations plan to consolidate their AppSec tooling this year – that’s a sign that clarity and oversight are becoming priorities again.
Excitement built the AI boom, but predictability will make it sustainable.
Read more on the blog: Will CISOs be under even more pressure in 2026?