AI arrived in the enterprise wearing a productivity badge, promising copilots and retrieval tools that can pull data and complete tasks. In practice, these systems behave like a new class of user – always on, broadly connected, and hungry for context.
And security teams can feel the ground shifting under their feet.
Thales captures the mood in a single statistic: 70% of respondents cite the rate of change in the AI ecosystem as a top AI security risk. The challenge sits somewhere between governance and physics. Controls depend on stability, while AI ecosystems depend on rapid iteration.
Boards are responding. Thales reports 30% of organisations now have a dedicated AI security budget (up from 20%), while 53% still pay for AI security from existing budgets. Either way, funding follows friction: organisations recognise AI security as a distinct workload, even when they finance it through the same pot.
Deepfakes move from novelty to operational risk
The social engineering layer has changed shape. Deepfakes and AI-generated misinformation used to be treated as reputational risks that lived mostly outside the SOC. But the numbers suggest they now belong in incident response playbooks.
Thales reports that 59% of organisations have seen deepfake attacks. The same report notes 48% have experienced reputational damage linked to AI-generated misinformation, and that this category recorded one of the biggest increases of any attack type.
It’s important to remember, too, that reputational harm doesn’t come alone. It tends to show up with downstream operational consequences – think employee trust, customer confidence, media scrutiny, and regulatory attention. That makes this kind of attack a lever that threat actors can pull further down the line.
Attackers follow trust, and trust lives in identity
While defenders debate model safety, attackers keep playing the simpler game: access. A convincing message, a captured token, an abused OAuth grant, a helpdesk call that yields a reset. Modern intrusions thrive in the places where legitimate work already happens.
CrowdStrike’s 2026 reporting leans hard into that reality. It describes a world where 82% of detections are malware-free, with adversaries moving through valid credentials, trusted identity flows and approved SaaS integrations.
This is important, because ‘malware-free’ frequently looks like ‘normal’. Security teams then rely on identity telemetry, SaaS audit trails, conditional access signals, token patterns and behavioural anomalies to spot intent. The perimeter story fades. The identity story comes into focus.
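What does spotting intent actually look like? The sketch below, a deliberately minimal Python example, flags two of those signals – first-seen OAuth consent grants and logins from unfamiliar locations – from a generic event stream. The field names, scope labels and thresholds are illustrative assumptions, not any vendor's schema.

```python
from collections import defaultdict

# Hypothetical, simplified identity events. Real schemas (Entra ID,
# Okta, Google Workspace) differ, but carry equivalent fields.
EVENTS = [
    {"type": "oauth_grant", "user": "j.doe", "app": "mail-sync",  "scopes": ["Mail.Read"]},
    {"type": "login",       "user": "j.doe", "country": "GB"},
    {"type": "oauth_grant", "user": "j.doe", "app": "pdf-helper", "scopes": ["Mail.ReadWrite", "Files.ReadWrite.All"]},
    {"type": "login",       "user": "j.doe", "country": "RO"},
]

# Scopes broad enough to warrant review regardless of novelty (illustrative).
SENSITIVE_SCOPES = {"Mail.ReadWrite", "Files.ReadWrite.All", "Directory.ReadWrite.All"}

def flag_identity_anomalies(events):
    """Yield alerts for first-seen grants, new login geos, and broad scopes."""
    seen_apps = defaultdict(set)  # user -> apps they have granted before
    seen_geos = defaultdict(set)  # user -> countries they have logged in from
    for e in events:
        if e["type"] == "oauth_grant":
            if e["app"] not in seen_apps[e["user"]]:
                yield f"first-seen OAuth grant: {e['user']} -> {e['app']}"
            broad = SENSITIVE_SCOPES & set(e["scopes"])
            if broad:
                yield f"broad scopes granted to {e['app']}: {sorted(broad)}"
            seen_apps[e["user"]].add(e["app"])
        elif e["type"] == "login":
            # Cold start: a real system would warm this baseline from
            # weeks of history before it starts alerting.
            if e["country"] not in seen_geos[e["user"]]:
                yield f"login from new country for {e['user']}: {e['country']}"
            seen_geos[e["user"]].add(e["country"])

for alert in flag_identity_anomalies(EVENTS):
    print(alert)
```

A production system would build its baselines from history and correlate signals before alerting. The point is narrower: the raw material already sits in audit logs most organisations collect today.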
Speed multiplies the difficulty. CrowdStrike reports an average eCrime breakout time of 29 minutes in 2025, with a fastest observed breakout of 27 seconds, and an example where data exfiltration began within four minutes of initial access.
Inevitably, those timelines shrink the margin for indecision. They reward organisations that treat identity as infrastructure, and they punish organisations that treat it as an administrative service.
When prompts influence outcomes, access becomes the product
AI introduces a new kind of user influence – inputs that shape what systems do next. CrowdStrike describes adversaries using prompt injection against GenAI tools across 90+ organisations to generate harmful outputs, including content that supported credential theft and cryptocurrency theft.
The important part here is practical rather than philosophical. Prompt injection sits inside a trust relationship. It relies on a system being allowed to do useful things with sensitive context. Once the system can query internal data or trigger actions, the input channel becomes an operational security concern.
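To make that concrete, here is a minimal sketch of the mitigation, assuming a hypothetical tool-calling assistant. Any action the model proposes is checked against a default-deny policy scoped to the requesting user's own permissions, and privileged actions are refused whenever the prompt context contains untrusted content. Tool names and the policy shape are illustrative, not a specific framework's API.

```python
# Hypothetical guardrail for a tool-calling assistant.
READ_ONLY_TOOLS = {"search_docs", "summarise_ticket"}
PRIVILEGED_TOOLS = {"send_email", "update_record"}

def allowed(tool: str, user_permissions: set[str], input_is_untrusted: bool) -> bool:
    """Gate a model-proposed tool call before execution."""
    if tool in READ_ONLY_TOOLS:
        return True
    if tool in PRIVILEGED_TOOLS:
        # Actions inherit the *user's* rights, never the platform's,
        # and are refused outright when the prompt context contains
        # untrusted content (e.g. a retrieved document or inbound email).
        return tool in user_permissions and not input_is_untrusted
    return False  # default-deny anything unrecognised

# The model read an external document, then proposed sending mail:
print(allowed("send_email", {"send_email"}, input_is_untrusted=True))  # False
print(allowed("search_docs", set(), input_is_untrusted=True))          # True
```

The design choice is the default-deny: under a policy like this, a successful injection inherits at most the user's own rights, and none at all while untrusted content is in play.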
Recent research from IBM adds another angle: credentials and access artefacts for AI services show up in the same places as everything else. IBM reports exposure of over 300,000 ChatGPT credentials associated with infostealer activity.
Even when individual credentials get rotated, the risk holds. AI usage expands the set of identities and secrets an attacker can target, and infostealers keep harvesting whatever users store or sync.
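One cheap defensive habit follows directly: sweep the places infostealers sweep. The sketch below checks a handful of common files for strings shaped like AI service keys. The file list and regex patterns are illustrative assumptions, not an exhaustive or authoritative inventory of key formats.

```python
import re
from pathlib import Path

# Regexes for common API-key shapes. Formats change; treat these as
# illustrative patterns rather than a complete list.
KEY_PATTERNS = {
    "openai-style": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "bearer-token": re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{20,}"),
}

# Places infostealers commonly harvest: env files and shell history.
CANDIDATE_FILES = [".env", ".bash_history", ".zsh_history"]

def scan_home_for_keys(home: Path = Path.home()) -> None:
    """Report files under `home` that appear to hold plaintext API secrets."""
    for name in CANDIDATE_FILES:
        path = home / name
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in KEY_PATTERNS.items():
            if pattern.search(text):
                # Report the location only; never log the secret itself.
                print(f"possible {label} secret in {path}")

if __name__ == "__main__":
    scan_home_for_keys()
```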
Here are the lessons you can take from all this research right now:
- Treat AI agents like privileged identities. Define what they can access, where they can act, and what they must never see (a minimal sketch follows this list).
- Instrument identity and SaaS activity for intent. ‘Malware-free’ activity still leaves traces in logins, tokens, OAuth grants, and unusual platform usage.
- Design for minutes, not days. Breakout and exfiltration timelines require rapid containment, strong segmentation and rehearsed playbooks.
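To ground that first lesson, here is a minimal sketch of an agent permission manifest in Python. The resource and action names are hypothetical; what matters is that the access, act and never-see sets are written down and enforced with a hard deny, rather than implied by whatever integrations happen to exist.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentPolicy:
    """Declarative manifest for one AI agent identity (names are illustrative)."""
    name: str
    can_read: frozenset = field(default_factory=frozenset)   # data it may access
    can_act: frozenset = field(default_factory=frozenset)    # actions it may take
    never_see: frozenset = field(default_factory=frozenset)  # hard deny, wins over everything

    def authorise(self, verb: str, resource: str) -> bool:
        if resource in self.never_see:
            return False          # deny list always wins
        if verb == "read":
            return resource in self.can_read
        if verb == "act":
            return resource in self.can_act
        return False              # default-deny unknown verbs

support_agent = AgentPolicy(
    name="support-summariser",
    can_read=frozenset({"tickets", "kb-articles"}),
    can_act=frozenset({"draft-reply"}),
    never_see=frozenset({"payroll", "source-code"}),
)

assert support_agent.authorise("read", "tickets")
assert not support_agent.authorise("read", "payroll")      # hard deny
assert not support_agent.authorise("act", "close-ticket")  # never granted
```

The deny list deliberately overrides everything else, which keeps 'must never see' true even if a careless grant is added later.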
AI expands capability by expanding access. Organisations that harden identity and constrain what ‘helpful systems’ can do will keep AI useful – without letting it become a high-speed insider.