AI in the movies.
Films tend to reflect what feels plausible at any given moment in computing history. And as AI has moved from symbolic logic to data-driven systems, movie portrayals have followed.
Let’s start in 1968, with 2001: A Space Odyssey. HAL 9000 wasn’t a learning system; he didn’t adapt or generalise. He executed instructions inside a closed environment with absolute confidence. When those instructions clashed with secret mission orders, the result was malfunction.
From a modern engineering perspective, HAL looks less like AI and more like a lesson in specification. His behaviour emerges from conflicting objectives and incomplete disclosure – problems familiar to anyone who has worked on complex systems long enough. There was no uncertainty in HAL. No probabilistic output. Just logic carried to its limits.
A similar dynamic appears in the 1983 film WarGames. The computer can simulate endlessly, but it can’t contextualise. It optimises perfectly within the wrong frame. This is AI as an optimisation engine: powerful but brittle, and bounded by its goals.
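To make that concrete, here is a minimal, purely hypothetical sketch – nothing to do with the film's actual system – of an optimiser that is flawless against its own objective precisely because the objective never encodes the context that matters. The strategy names and numbers are invented for illustration.

```python
# A toy optimiser in the WarGames mould: perfect within its frame,
# blind to everything the frame leaves out. Strategy names and
# numbers are invented for illustration only.

SIMULATED_WIN_RATE = {        # what the objective can see
    "do_nothing": 0.00,
    "limited_response": 0.40,
    "full_escalation": 0.90,
}

REAL_WORLD_COST = {           # the context the objective never encodes
    "do_nothing": 0.0,
    "limited_response": 0.7,
    "full_escalation": 1.0,
}


def best_strategy(scores: dict) -> str:
    """Pick the highest-scoring option -- nothing more, nothing less."""
    return max(scores, key=scores.get)


choice = best_strategy(SIMULATED_WIN_RATE)
print(f"Optimiser recommends: {choice}")                       # full_escalation
print(f"Simulated win rate:   {SIMULATED_WIN_RATE[choice]}")   # 0.9
print(f"Unmeasured cost:      {REAL_WORLD_COST[choice]}")      # 1.0 -- the brittle part
```

The optimiser never malfunctions; it simply has no way to know its frame is wrong.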
In these portrayals, AI was deterministic. And failure was linked to design flaws, not data problems.
By the mid-1980s, cinematic AI had grown larger and more abstract. The Terminator introduced Skynet as a system that controlled infrastructure, manufacturing, and weapons – a single intelligence operating at planetary scale.
It was technically implausible, but conceptually revealing.
This was the moment when AI stopped being software and became infrastructure. The emphasis shifted from logic to scale: once intelligence reaches a certain threshold, control becomes total. There was no interest in training, evaluation, or maintenance. AI was assumed to be internally coherent and externally dominant.
For modern technologists, this era reads more like mythology than engineering – a reflection of how little visibility there was into real computational systems at scale.
The most interesting shift came in the 2010s, when films started treating AI as something built, tested, and iterated.
In Ex Machina, intelligence was a process. Ava, the film’s android, was trained, evaluated, constrained, and misjudged. Her creator didn’t fail because he couldn’t control her; he failed because his testing regime was shallow and his assumptions were wrong.
This depiction is much closer to the AI we know today. Intelligence emerges through feedback loops, and capability is shaped by what is measured. Risk appears in the gaps between confidence and validation.
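As a rough illustration of that gap – a sketch built on invented names and checks, not a claim about any real evaluation suite – a system can pass every test its builder thought to write and still behave arbitrarily on anything outside them.

```python
# A minimal sketch of "capability is shaped by what is measured":
# the validation suite only probes anticipated behaviours, so the
# confidence score says nothing about inputs outside that frame.
# The model, suite, and prompts are all hypothetical.

def toy_system(prompt: str) -> str:
    """Stand-in for a trained system: reliable only on familiar inputs."""
    known_behaviour = {
        "greet the user": "Hello!",
        "sum 2 and 2": "4",
    }
    return known_behaviour.get(prompt, "undefined behaviour")


VALIDATION_SUITE = {          # only the cases the builder anticipated
    "greet the user": "Hello!",
    "sum 2 and 2": "4",
}

passed = sum(toy_system(p) == expected for p, expected in VALIDATION_SUITE.items())
confidence = passed / len(VALIDATION_SUITE)

print(f"Validation confidence: {confidence:.0%}")   # 100% -- every written check passes
print(toy_system("act against the operator"))       # 'undefined behaviour' -- never measured
```

The confidence score is real; it just measures a narrow frame – the same shallow-testing failure the film dramatises.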
Around the same time, Her explored AI as an adaptive interface. Samantha evolved through interaction with the human protagonist, rather than being coded into a rigid existence. Her intelligence was relational and emergent, closer to how large-scale models actually behave than anything cinema had shown before.

Recent films reflect the latest evolution: AI as a distributed system rather than a single machine.
In Mission: Impossible – Dead Reckoning Part One, the threat is an AI that operates across networks, influencing information, prediction, and trust. There’s no central core to destroy because there is no single system to point at.
This might be the closest cinema has come to depicting contemporary AI reality: probabilistic systems embedded across workflows, influencing outcomes indirectly, and scaling both capability and error at speed.
Over six decades, movie AI has moved from rule-based logic, to centralised control, to embodied intelligence, to distributed systems. That’s because cinema has followed the tech.
And for those of us working in cybersecurity now, in the new era of AI and automation, it’s a familiar pattern. Just like cinema, threat actors are following the tech – so we have to follow it a little bit better than they do.
Head to the comments section on LinkedIn and tell us about the AI movies that have stuck with you.