The ROI of enterprise tech pilots.
There are pilots running across teams, demos landing well with leadership, and a steady stream of new tools promising to accelerate adoption. If you’re an outsider looking in, you get a sense of momentum.
Inside, progress is actually very uneven. Projects move forward and then stall. Early results generate interest, but scaling remains just out of reach.
On the podcast, Dr. Rumman Chowdhury (Founder and CEO at Humane Intelligence) mentioned a 2025 MIT-affiliated report on the generative AI divide. The report puts a number on this pattern: around 95% of enterprise generative AI pilots fail to deliver measurable ROI, with most never reaching production scale.
In the face of this, the instinct is to look for technical explanations – but the reality isn’t actually about the tech.
Most companies have already crossed the experimentation threshold. AI tools are being tested across functions, from customer support to internal operations. Proofs of concept demonstrate clear efficiencies, and tasks that once took hours now take minutes.
Great.
But those gains tend to stay contained.
As Dr. Rumman Chowdhury notes:
“The number one use case for generative AI models has been back-office document automation. Processing, fine-tuning small models, so that people can ask questions.”
Beyond those predictable, relatively straightforward use cases, complexity increases – systems have to interact with each other and data has to move across boundaries. And outputs influence decisions that carry real consequences.
At that point, it’s not so much about capability as it is about confidence.
“The reality is most companies have not rolled out and scaled up generative AI tools because of the significant amount of risk that’s involved.”
If we focus specifically on generative AI, we know it introduces new layers of consideration. The questions it raises extend into governance, compliance, and accountability, and organisations approach them carefully. Moving from pilot to production requires clarity on how risk is defined and managed over time.
Chowdhury emphasised how success is often framed:
“We think about how AI’s being measured in terms of impact on the workforce, it’s often measured in terms of productivity – but guess what, yes, machines will always produce faster than people… that doesn’t mean it’s any good.”
Speed is a neat, visible metric. Quality depends on context, interpretation, and judgement.
In cybersecurity, those qualities drive outcomes. Decisions based on incomplete or misleading outputs introduce new vulnerabilities. Trust in AI systems develops through understanding how they perform under real conditions, rather than ideal ones.
Another pattern runs through these stalled pilots: the importance of expertise.
AI tools raise baseline performance. But their impact deepens when used by individuals who bring domain knowledge and experience:
“The discernment, being able to understand what is good or bad, the context – that only comes from people, it doesn’t come from AI.”
Because experts provide structure. They recognise when outputs align with reality and when they require scrutiny. They understand how results fit into broader systems and decisions.
This dynamic shapes how AI scales. Access to tools is only one part of the equation – the ability to evaluate and act on outputs is the other, still often underestimated, part.

The next phase of AI adoption is beginning to take shape, and it centres on creating the conditions for scaling existing pilots. Those conditions form the infrastructure around the technology: they determine whether a pilot remains isolated or becomes embedded.
The current wave of AI experimentation has shown what the technology can do. The next challenge is to build organisations that can support it.
Chowdhury said:
“Trust is not owed, it is earned.”
And that applies as much to AI systems as it does to the companies deploying them. Models have advanced quickly, but the systems we need in order to use them safely and responsibly are maturing more gradually.
Read the blog: Is there any point in cybersecurity education in the age of AI?
Join the newsletter to receive the latest updates in your inbox.