Why we need AI to get boring

by Black Hat Middle East and Africa



Expand your perspective and prepare for cyber resilience in 2026, with inspiration and insights from the global Black Hat MEA community. 

This week we’re focused on…

Being boring

Well, specifically on the idea of AI being boring. 

Why? 

Because we’ve been reading new research about AI-driven security risks this week (we went deep into it on the blog – you can read that here). 

And it got us thinking that maybe the best thing that could happen to AI in 2026 is that it becomes really, really boring.

Because security teams rarely fear boring technologies. No one wakes up dreading a JSON parser, or wondering whether their SAML configuration has suddenly developed a personality.

Boring tech behaves the way you expect. Boring tech doesn’t hallucinate. Boring tech doesn’t decide to install a library you’ve never heard of.

AI, at the moment, is the opposite of boring.

It moves fast. It evolves faster. And right now, it sits in the unpredictability zone security teams dislike the most: the zone where it’s impossible to tell exactly where it’s used, how it’s behaving, or what it’s silently wiring into the codebase.

Which is exactly why the security industry may need AI to settle down. 

The challenges of unpredictable AI 

Cycode’s survey on the state of product security for the AI era found that 100% of surveyed organisations already have AI-generated code in production, yet only 19% of security leaders say they have complete visibility into how AI is used across development. A full 65% report that vulnerability counts have increased since adopting AI coding assistants.

This is the dangerous middle phase of a technology boom: universal adoption without universal oversight.

And research from Endor Labs shows what that looks like on the ground. AI coding agents confidently recommend dependencies that don’t exist (34% of suggested versions were hallucinated) or that are known to be vulnerable (49%). 

Only one in five dependency versions pulled in by agents was safe without additional controls.
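One "additional control" of the kind implied here is a simple gate that checks agent-suggested dependencies against a team-maintained allowlist of vetted package versions before anything gets installed. The sketch below is purely illustrative – the allowlist contents and function names are our own assumptions, not Endor Labs’ or Cycode’s tooling:

```python
# Hypothetical control gate for AI-suggested dependencies: a package/version
# pair is only approved if it appears on a team-vetted allowlist.
# Illustrative sketch only; allowlist contents and names are assumptions.

APPROVED = {
    "requests": {"2.31.0", "2.32.3"},
    "pyyaml": {"6.0.1"},
}

def vet_dependency(name: str, version: str) -> bool:
    """Return True only if this exact package/version pair is vetted."""
    return version in APPROVED.get(name.lower(), set())

def vet_suggestions(suggestions):
    """Split agent suggestions into (approved, blocked) lists."""
    approved, blocked = [], []
    for name, version in suggestions:
        target = approved if vet_dependency(name, version) else blocked
        target.append((name, version))
    return approved, blocked
```

In practice a gate like this would sit in CI, failing the build whenever the blocked list is non-empty – so a hallucinated or vulnerable suggestion never reaches production, regardless of how confident the agent sounded.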

And the connective tissue is equally unpredictable. Their analysis of more than 10,000 Model Context Protocol (MCP) servers (the integration layer powering AI agents) found that 75% are maintained by individuals, 41% have no licence information, and 82% rely on sensitive APIs such as file system access or code execution. Each introduces around three vulnerable dependencies on average.

We’re not suggesting this is malicious. It’s just new – and new tech creates new attack surfaces faster than governance can form.

Why would boring AI save us? 

When technologies stabilise, security becomes dramatically easier. 

The cloud didn’t become defensible until architectures, IAM patterns, and Zero Trust models stabilised. DevOps didn’t scale securely until we standardised around CI/CD pipelines, container images, and SAST/SCA workflows.

AI hasn’t reached that point. Its behaviour is still varied; its supply chain is still chaotic; its failure modes are still undocumented.

Predictable (or boring) AI would look very different: 

  • Standardised governance and approvals
  • Enterprise-grade connectors instead of community projects
  • Known vulnerability classes instead of one-off surprises
  • Clear blueprints for safe usage across teams

As Google Cloud’s 2026 threat forecast warns, threat actors will soon treat AI as a default tool. If defenders are stuck in the novelty phase while attackers industrialise, it’s not a fair fight.

Could 2026 be the turning point? 

At this stage, we don’t really believe AI is going to become boring any time soon. 

But in the coming year, the combination of rising AI-driven attacks, growing regulatory pressure, and the operational realities inside engineering teams may finally push organisations from experimentation into discipline. 

Cycode notes that 97% of organisations plan to consolidate their AppSec tooling this year – that’s a sign that clarity and oversight are becoming priorities again.

Excitement built the AI boom, but predictability will make it sustainable.

Read more on the blog: Will CISOs be under even more pressure in 2026? 
