Is there any point in cybersecurity education in the age of AI?

by Black Hat Middle East and Africa
Across universities, conversations that once centred on course choices now drift to a more existential place. People are asking if there’s any point in studying at all. And beneath that: where does education fit in a world where AI can already produce the outputs we’re trained to deliver?

Speaking on the Black Hat MEA podcast, Dr. Rumman Chowdhury (Founder and CEO at Humane Intelligence) said she hears this a lot. A growing number of young people are stepping back from higher education altogether, unsure whether it can still have a meaningful impact on their futures. 

The change here isn’t driven only by the presence of AI, but by what AI reveals about learning. For years, education has been structured around producing outputs – essays, reports, answers. Generative AI performs those tasks with ease, and in doing so, it exposes how closely learning has been tied to deliverables rather than understanding.

“The fundamental lesson is maybe we should not teach to produce output, but we should teach to produce knowledge.”

When the shortcut becomes the system

Research emerging from academic settings gives this tension some shape. Chowdhury described one study, which split mathematics students into two groups. One relied heavily on AI tools, while the other followed a traditional path.

Over a couple of years, the differences became significant. Students using AI more freely showed weaker retention, struggled to apply concepts in new contexts, and began to disengage from the purpose of education itself. Learning had been reduced to task completion – and once the task could be outsourced, motivation followed.

A different approach tells a more constructive story. When AI is introduced as part of the learning process (as something to interrogate, test, and challenge), outcomes are far more positive. Students move faster through material while building a stronger grasp of it. They remain active participants in reasoning rather than passive recipients of answers.

Chowdhury describes this as functional literacy. It centres on working alongside AI, shaping and evaluating its outputs, and understanding where it fits within a broader process of thinking.

And in cybersecurity, this distinction carries real weight. Tools can catch patterns, generate summaries, and automate workflows. But meaning only emerges through context and judgement.

Expertise in an AI-assisted world

The idea that expertise might fade in relevance has gained traction in recent years. With the right prompt, the thinking goes, anyone can access expert-level output.

But evidence from the workplace suggests more nuance. Chowdhury shared another study, this one conducted at Procter & Gamble, which explored how people of varying experience levels performed with AI tools. Those without deep expertise did improve their baseline performance – but the ones with expertise extended far beyond it.

The difference lies in discernment. Experienced practitioners question inconsistencies and understand the broader environment in which decisions sit. AI accelerates their work because it operates within a framework they already possess.

For cybersecurity professionals, that framework includes threat modelling, risk prioritisation, and an understanding of how systems behave under pressure. These are built over time, through exposure and study – they can’t be generated on demand.

Rethinking what we measure

When an organisation or individual adopts AI, they tend to focus on speed. Productivity is the key metric, and generative AI pushes it up. But increased productivity doesn’t necessarily equate to quality – because quality depends on the ability to distinguish between the plausible and the correct.

So in cybersecurity, education has to take on a different role. It needs to become the process through which people learn to evaluate, question, and synthesise. These capabilities support decisions in uncertain environments, where machine evaluations are narrow and the consequences of getting things wrong are significant.

Chowdhury believes the presence of AI has brought long-standing tensions into sharper focus. Many professionals already work outside the narrow boundaries of the subjects they studied at university, and their effectiveness comes from a blend of skills that they’ve picked up throughout their careers (and lives) – including the ability to connect ideas across domains.

Chowdhury’s own work draws on political science, governance, and technical understanding. Conversations about AI systems rely as much on knowledge of institutions and incentives as they do on models and code.

“I’m leaning on my political science education where I learned about institutional development, I learned about incentive models, I learned about power structures… but I need all of it. I need to have understood why the constitution was written the way it was written, and also how LLMs are built, to have a reasonable conversation to tell a company how they should structure their AI test and evaluation models.” 

When you look at it this way, education becomes a foundation rather than a direct pipeline. It equips individuals to navigate complexity and adapt across roles – and to build connections between seemingly unrelated areas. 

Cybersecurity sits squarely within this dynamic. It requires technical depth, alongside an understanding of human behaviour, organisational structures, and evolving risk landscapes.

We need to build cybersecurity education around analysis and reasoning, alongside technical skills. We need to integrate AI into training with a focus on evaluation and interpretation. And we have to trust learning as a continuous, lifelong process. 

AI has changed the surface of work. But the underlying need for expertise, judgement, and critical thinking is more important than ever. 
