Is there any point in cybersecurity education in the age of AI?
Is cybersecurity education still relevant in the age of AI? Dr Rumman Chowdhury explores how AI is reshaping learning, expertise, and the skills security professionals need.
A few weeks before she spoke to us on the Black Hat MEA podcast, Dr. Rumman Chowdhury (Founder and CEO at Humane Intelligence) spoke to a VC who told her he avoids investing in companies that help others adopt AI. His reasoning was that AI will soon make every decision for us anyway, so intermediary tools will lose their value.
Chowdhury didn’t say it out loud to him, but told us: “The only thought that occurred in my head was, ‘Sir, if you think that is true then why are you on a Zoom call with me?’ Because if you’re imagining a world in which AI is making all the decisions, you know… hug your children.”
The remark captures a broader mood in the market. Confidence in AI’s trajectory has moved beyond optimism into certainty, where future outcomes are treated as inevitable.
And if we take a step back, we can see that this is often how hype matures. Early excitement builds into conviction, and conviction begins to influence investment, strategy, and expectations.
AI’s current trajectory echoes earlier waves of technological enthusiasm. Chowdhury pointed out that cryptocurrency, NFTs, and the metaverse each followed a similar pattern: rapid capital inflow, expansive promises, and a gradual recalibration as practical realities emerged.
With AI, the scale is larger and the pace is quicker. Within a few years, hundreds of billions have been committed, and generative models have entered mainstream workflows.
But still, clarity around sustainable, high-value use cases remains limited.
Inside organisations, AI is finding traction in specific, contained areas. Document processing, internal knowledge systems, and tightly scoped automation represent the most consistent deployments. And they deliver efficiency and convenience – streamlining operations and reducing manual effort.
But as Chowdhury put it:
“This is not the trillion-dollar explosion that all these investors were promised, and now we’re seeing investors get antsy.”
Alongside practical questions about use cases, the language surrounding AI keeps changing. The concept of artificial general intelligence is a strong example of this.
Initially framed as machines capable of human-like reasoning, AGI has recently taken on more operational definitions. In some contexts, it now aligns with the automation of economically valuable tasks. In others, it’s tied to revenue milestones.
These changes in definition reflect the influence of commercial pressures. As companies work to translate capability into business outcomes, those definitions begin to align more closely with measurable performance. Meanwhile, the average person on the street still assumes that AGI will look like it does in the movies.
So it’s a concept that carries different meanings depending on perspective, with technical ambition and financial targets intertwined.
Despite rapid progress, current systems continue to operate within clear limitations. They generate language fluently, assist with coding, and support analysis across a range of domains – all of which is useful.
But they still struggle with consistency, reasoning depth, and edge cases that require contextual understanding.
Chowdhury’s comparison – placing today’s AI closer to a house cat than a human – offers a vivid way of framing that gap. It highlights how much of human cognition remains difficult to replicate, even with significant advances in model design and scale.
Using autonomous driving as an example, she pointed out:
“The average 16-year-old can do in two weeks what it took 20 years and hundreds of billions of dollars to have a car do poorly. So while Waymo is currently considered a success, you have to think of all the sunk costs and capital put into it, and the many, many decades and PhDs and brilliant minds that were put behind making a thing that can barely mimic what your kid can do in one week of lessons. When you think about it that way – how hard it is to teach a machine to do the things that you and I do without even thinking about it – it just shows you how much work there is to be done.”
Several indicators suggest a shift in how AI is being viewed. Investment conversations are becoming more measured; use cases are narrowing towards areas with demonstrable value; and narratives around capability are evolving to reflect operational realities.
This phase often marks the transition from expansion to consolidation. Expectations align more closely with outcomes, and the focus moves towards building sustainable applications.
So how do we navigate (and make sense of) the next phase of AI adoption and perception?
AI continues to develop at pace. But the narrative surrounding it is beginning to settle, creating space for a more grounded understanding of where it delivers value – and where there’s still a lot of work to be done.