Attackers are organised like startups now: from lone hackers to operating models
Cybercriminals are now operating like startups: with specialised roles, automation, and scale. What does this mean for CISOs and defenders in 2026?
The start of the year felt deceptively quiet. Without any headline-grabbing breaches, there’s been a sense that major incidents are still months away.
But that calm is misleading.
In a recent podcast hosted by Dark Reading, three experts unpacked what multiple industry predictions are signalling for 2026 – and why CISOs shouldn’t wait for breach notifications before acting.
The panel brought together Rob Wright (Managing Editor at Dark Reading), David Jones (Editor, Cybersecurity Dive), and Alissa Irei (Senior Editor, TechTarget SearchSecurity), synthesising forecasts from across the security industry into a set of recurring themes.
The message was that attackers are expected to focus less on novel exploits, and more on abusing trust, automation, and complexity already embedded in enterprise environments.
One of the key issues discussed was the rise of agentic AI and autonomous systems.
As organisations deploy AI agents to carry out business workflows with minimal human oversight, those systems increasingly hold meaningful privileges – access to data, systems, and decision-making processes. The panel warned that these agents will become attractive targets – not because the models themselves are flawed, but because attackers can abuse their permissions and integrations.
This reframes AI risk away from abstract model safety and back toward familiar security problems: identity abuse, privilege escalation, and insufficient guardrails – now playing out inside far more powerful systems.
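To make the guardrail point concrete, here is a minimal, illustrative sketch – ours, not the panel's – of what a deny-by-default permission check around an agent's tool calls could look like. The agent name, permission strings, and invoke_tool helper are hypothetical.

```python
# Illustrative only: a deny-by-default permission check around an AI agent's tool calls.
# The identity model, permission strings, and tool names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    name: str
    # Explicit allow-list of actions this agent may perform; anything else is refused.
    permissions: set = field(default_factory=set)


def invoke_tool(agent: AgentIdentity, action: str, payload: dict) -> str:
    """Run a tool call only if the agent holds the matching permission."""
    if action not in agent.permissions:
        # Refusals are surfaced loudly, so abuse of the agent's integrations stays visible.
        raise PermissionError(f"{agent.name} may not perform '{action}'")
    return f"executed {action} with {payload}"


if __name__ == "__main__":
    invoice_bot = AgentIdentity("invoice-bot", permissions={"read:invoices"})
    print(invoke_tool(invoice_bot, "read:invoices", {"invoice_id": 42}))
    try:
        invoke_tool(invoice_bot, "transfer:funds", {"amount": 10_000})
    except PermissionError as err:
        print(f"blocked: {err}")
```

The point is not the code itself but the design choice it represents: the agent's scope is defined explicitly, and anything outside that scope fails loudly rather than succeeding quietly.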
Another clear theme was the continued shift of identity to the centre of cyber risk (we wrote about that in more detail here).
The panel highlighted how non-human identities (including service accounts, APIs, bots, and machine credentials) are expected to outnumber human users by a wide margin. Each represents potential access, and many of them operate with limited visibility or governance.
In that context, identity is fast becoming the primary security boundary – replacing the traditional network perimeter as the main point of enforcement. Zero trust, in this framing, is non-negotiable.
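To illustrate the visibility gap – again our own sketch, with made-up identity records and a made-up 90-day rotation threshold – a defender might start by flagging non-human identities that lack an accountable owner or carry long-unrotated credentials:

```python
# Illustrative only: flag non-human identities with weak governance signals.
# The identity records and the 90-day rotation threshold are invented for this example.

from datetime import date, timedelta

MAX_CREDENTIAL_AGE = timedelta(days=90)

non_human_identities = [
    {"name": "ci-deploy-bot", "owner": "platform-team", "last_rotated": date(2025, 11, 1)},
    {"name": "legacy-api-key", "owner": None, "last_rotated": date(2023, 2, 14)},
    {"name": "reporting-svc", "owner": "finance-it", "last_rotated": date(2024, 1, 3)},
]


def governance_gaps(identities, today):
    """Return findings for identities with no accountable owner or stale credentials."""
    findings = []
    for ident in identities:
        if ident["owner"] is None:
            findings.append(f"{ident['name']}: no accountable owner")
        if today - ident["last_rotated"] > MAX_CREDENTIAL_AGE:
            findings.append(f"{ident['name']}: credential not rotated in over {MAX_CREDENTIAL_AGE.days} days")
    return findings


if __name__ == "__main__":
    for finding in governance_gaps(non_human_identities, today=date(2026, 1, 15)):
        print(finding)
```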
AI is not only changing systems; it is reshaping human-focused attacks too.
The Dark Reading panel spent time on AI-driven social engineering, including deepfakes and voice cloning, discussing how these techniques erode trust at scale. Attacks that once required careful targeting can now be personalised and automated, increasing both volume and credibility.
This poses a particular challenge for organisations that still rely on informal trust signals – a familiar voice, a convincing video call, a plausible internal request. As the panel noted, these attacks don’t exploit software vulnerabilities so much as human ones.
If there was one prediction that felt dishearteningly familiar, it was supply chain risk.
The panel agreed that attackers are likely to continue targeting smaller, embedded vendors as a route into much larger environments. As digital supply chains grow more complex, visibility decreases – and accountability becomes harder to enforce.
Rather than a single catastrophic event, the greater risk for 2026 might be a steady accumulation of compromises: each individually minor, but collectively damaging.
A final theme running through the discussion was a shift in defensive thinking.
Rather than assuming breaches can be prevented, the panel emphasised resilience and recovery: how quickly organisations can detect misuse, contain it, and limit downstream impact. Speed matters – not just in detection, but in response and decision-making once an attacker has gained access.
What we took from this discussion isn’t that 2026 introduces entirely new, never-before-seen threats. It’s that existing weaknesses (think identity sprawl, over-trusted automation, and third-party exposure) are being amplified by scale and AI.
The organisations that cope best will treat these early signals as prompts to stress-test assumptions and design security programmes for adversaries who are already operating at speed.