AI superintelligence. Or more specifically, why threat actors don't need it: they can cause serious damage without it.
The headlines are about artificial general intelligence (AGI): when we'll get it, and the existential damage it could cause.
But the latest edition of the world’s first comprehensive, internationally backed scientific review of general-purpose AI systems focuses on a different concern.
Attackers don’t need AGI.
They don’t need runaway superintelligence or self-directing cyberwar machines, and they don’t even need fully autonomous systems.
They just need tools that make it cheaper and quicker to launch persuasive social engineering attacks. And according to the International AI Safety Report 2026, they already have them.
The report (written with input from over 100 experts and backed by more than 30 countries) is very direct about the current state of things. It explicitly states that AI isn’t running end-to-end cyberattacks in the wild.
But even without fully autonomous attacks, there’s strong evidence that criminal groups and state-sponsored actors are already using AI within cyber operations.
The report highlights that AI systems are particularly effective at vulnerability discovery and code generation. In one premier cybersecurity competition, an AI agent identified 77% of vulnerabilities in real software, placing it in the top 5% of more than 400 (mostly human) teams.
This might not be AGI, but it is acceleration – and acceleration is what changes risk curves.
Elsewhere, the report describes underground marketplaces selling AI-enabled tooling (including AI-generated ransomware), lowering the skill threshold for less sophisticated actors.
Again, this might not be revolutionary, but it's compressing the time between vulnerability discovery and exploitation. And on the defence side, we feel that compression acutely.

The report describes semi-autonomous attacks causing significant disruption. AI handles the operational tasks, but humans intervene at critical decision points.
Over the last couple of years, security leaders have predicted this hybrid model. It's a way for attackers to outsource friction: AI models can draft phishing emails, translate them into five languages, and generate evasive variants at machine speed.
The report also cites a source claiming that identity-based attacks rose by 32% in the first half of 2025. It stops short of attributing that rise directly to AI – but it does say these trends fall squarely within AI’s capabilities.
There’s a persistent narrative that AI risk is primarily about future catastrophic scenarios.
But this report reads differently. It suggests the real risk will grow from incremental capability stacking.
It also notes that models still struggle with long, multi-stage autonomous tasks. They can lose operational context, and they struggle to recover from simple errors without human help. That’s a little bit reassuring – but attacks only require enough capability to find one weak link.
As AI continues to help attackers scale their operations through low-skill, subscription-based capability, the strategic balance will shift gradually, and gradual shifts are difficult to mobilise around.
We think that ultimately this report shows that cybersecurity practitioners are already getting it right. No one in the field is waiting for AGI to pose a real threat – everyone’s aware that the genuine concern right now is a productivity shift inside adversary workflows.
You should expect to see more semi-autonomous AI threats over the coming year. Attackers will be able to test and iterate exploits faster, and make impersonation more convincing. We might not see a significant jump in sophistication in the short term, but we will see more throughput.
And this means defensive use of AI isn’t optional anymore. If adversaries are compressing cycle times, blue teams must do the same.
The report also makes the point that it's difficult to distinguish helpful uses of AI from harmful ones. Overly aggressive safeguards risk blocking legitimate defensive research. This is a classic dual-use problem, one cybersecurity has wrestled with for decades and will continue to navigate.
The global scientific community has now documented, carefully and conservatively, where frontier AI capabilities stand.
The conclusion is operational.
When attackers have better tools, your threat model changes. And those tools are already in their hands.
Want a deeper technical breakdown of the report’s cyber findings? Read our analysis on the blog: What cyber practitioners should take from the world’s biggest AI risk review