In a way, 2025 has been the year AI disappeared into the furniture. We wrote about shadow AI last week. AI has quickly become part of how work gets done across industries, and as a result, adoption is now being driven largely by non-technical teams – which brings new risk into the equation.
The latest Q3 analysis from Harmonic Security looked at over three million prompts and file uploads flowing through 300 generative AI and AI-embedded tools between July and September 2025. Of all uploaded files, 26.38% contained sensitive information (up from 22% in the previous quarter).
On average, organisations were using 27 distinct AI tools, and uploading about 4.427 GB of data per quarter to GenAI platforms, compared with 1.32 GB in Q2. At the same time, the number of new GenAI tools employees introduced dropped from 23 to 11.
To us, this suggests that the exploration phase is over. People have picked their tools and woven them into their daily routines.
This is no neat AI programme, introduced by leadership and then picked up by teams. New research from Moveworks (in partnership with Wakefield Research) tells a story of employees taking the adoption helm.
In a survey of 200 US IT executives at billion-dollar companies already using AI agents, 78% say agentic AI has led to a significant or total transformation of their operations. For 34%, the company is ‘completely transformed’; 45% say large parts of the business have changed; and only 21% report transformation confined to specific areas.
Importantly, 91% of those executives say non-technical employees are playing a larger role in driving agentic AI projects than in previous tech waves. Over three-quarters have seen successful AI initiatives come from non-leaders or support staff, and for 37% that’s happened multiple times.
So employees aren’t waiting for a steering committee. They’re going ahead and using AI to fix whatever is slowing them down. The same study finds 89% of executives believe workers are open to AI agents in their workflows, while 65% say employees prefer tools that integrate into existing processes rather than rip them up.
In effect, it’s a bottom-up AI revolution, running inside organisations that are still trying to manage risk from the top down.
If we go back into Harmonic’s dataset, the mix of sensitive information running through GenAI is telling.
In Q3, 57.25% of sensitive uploads involved business and legal data – everything from legal drafts and deal documents to financial projections and investment portfolios. Technical data made up another 25%, heavily skewed towards proprietary source code (65%), access keys and credentials (24%), and security incident reports (11%).
PII is present but not dominant: 15% of sensitive exposures fell into PII categories such as employee data and payroll. Customer data is a smaller slice at 3%, but even there, 58% of exposures involved credit card or billing data. The rest was split across customer profiles, authentication data and payment transactions.
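Harmonic doesn’t disclose how it classifies these exposures, but the basic shape of credential detection is easy to sketch. Below is a minimal, illustrative Python check for obvious secret formats in a prompt before it leaves the gateway – the pattern set and the example prompt are hypothetical, and real DLP tooling uses far richer detection (entropy analysis, provider-specific formats, document parsing):

```python
import re

# Illustrative patterns only – real DLP tooling uses far richer detection.
CREDENTIAL_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of credential patterns found in the given text."""
    return [name for name, pattern in CREDENTIAL_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    # Hypothetical prompt a developer might paste into a chatbot.
    prompt = 'Fix this bug: client = Client(api_key="sk_live_0abcdefghijklmnopqrstuv")'
    findings = scan_text(prompt)
    if findings:
        print(f"Upload flagged – possible secrets: {', '.join(findings)}")
```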
Then there’s the question of where this data is actually going. Harmonic reports that 21.81% of sensitive data went into tools that train on user inputs, including free versions of ChatGPT, Gemini, Claude and Meta AI. And 11.84% of all sensitive exposures occurred via personal or free accounts.
That’s rich business context being pasted into accounts the company doesn’t own, into systems that retain history and often train on whatever they receive.
Harmonic notes that most enterprises have now published AI usage policies. But policy alone hasn’t changed outcomes. Awareness might be high, but enforcement is low.
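Closing that gap means turning the written policy into a decision point on the data path. Here’s a minimal sketch of what such a rule might look like, assuming a hypothetical tool allowlist and account check – in practice this logic lives in a secure web gateway, browser extension or CASB, not application code:

```python
from dataclasses import dataclass

# Hypothetical policy model. The tool names below are not real product
# identifiers; they stand in for whatever your organisation sanctions.
SANCTIONED_TOOLS = {"corporate-chatgpt", "internal-assistant"}

@dataclass
class UploadEvent:
    tool: str                # destination AI tool
    corporate_account: bool  # company-managed account?
    sensitive: bool          # did DLP flag the content?

def decide(event: UploadEvent) -> str:
    """Map an upload event to an enforcement action."""
    if event.tool not in SANCTIONED_TOOLS:
        return "block"       # unsanctioned (shadow) tool
    if not event.corporate_account:
        return "block"       # personal/free account
    if event.sensitive:
        return "review"      # sanctioned tool, but hold for review
    return "allow"

print(decide(UploadEvent(tool="corporate-chatgpt",
                         corporate_account=False, sensitive=True)))  # block
```

The ordering matters: checking the tool and the account first catches exactly the two exposure paths Harmonic highlights – shadow tools and personal accounts – before content inspection even runs.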
While data flows into AI tools, identity is being stressed behind the scenes.
Rubrik Zero Labs’ report on the current identity crisis is based on insights from 1,625 IT and security leaders globally. It argues that as cloud, remote work and agentic AI dissolve traditional boundaries, identity has become the primary attack surface.
It’s definitely not a marginal concern. A worrying 90% of respondents agree that identity-based attacks represent the single largest threat to their organisations.
And Rubrik highlights non-human identities (NHIs) – API tokens, certificates, containers, automation tools, service accounts and AI agents. Citing external research, they note that NHIs now outnumber human users by 82 to 1, dramatically expanding the attack surface as agentic AI spreads.
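Getting ahead of that imbalance starts with an inventory. Here’s a deliberately simple sketch of one NHI hygiene check – flagging credentials that haven’t rotated recently – with hypothetical, hard-coded records standing in for what would really come from cloud IAM, a secrets manager and CI/CD systems:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory records; in reality these would be pulled from
# cloud IAM, a secrets manager and CI/CD systems, not hard-coded.
NHIS = [
    {"name": "ci-deploy-token", "kind": "api_token",
     "last_rotated": datetime(2024, 11, 3, tzinfo=timezone.utc)},
    {"name": "billing-svc", "kind": "service_account",
     "last_rotated": datetime(2025, 9, 20, tzinfo=timezone.utc)},
]

MAX_CREDENTIAL_AGE = timedelta(days=90)

def stale(nhis, now=None):
    """Return NHIs whose credentials are older than MAX_CREDENTIAL_AGE."""
    now = now or datetime.now(timezone.utc)
    return [n for n in nhis if now - n["last_rotated"] > MAX_CREDENTIAL_AGE]

for nhi in stale(NHIS):
    print(f"Rotate {nhi['kind']} '{nhi['name']}' "
          f"(last rotated {nhi['last_rotated'].date()})")
```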
These agents are already at the core of our businesses and our lives. Of the organisations surveyed, 89% have fully or partially incorporated AI agents into their identity infrastructure, and 58% of respondents estimate that over the next year, half or more of the cyberattacks they face will be driven by agentic AI.
Rubrik’s conclusion is that identity should be treated as the primary control plane for security decisions, with resilience (defined as the ability to restore identity infrastructure to a clean state quickly) on equal footing with prevention and detection.
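Rubrik doesn’t prescribe an implementation, but the ‘clean state’ idea can be illustrated with a toy example: keep content-addressed, point-in-time snapshots of identity data so a known-good configuration can be re-applied after compromise. Everything below is illustrative – real identity resilience spans groups, policies, federation and app configs across the whole IdP, not a single mapping:

```python
import json
import hashlib
from pathlib import Path

# Toy illustration of identity resilience: content-addressed snapshots
# of role assignments that can be restored to a known-good state.
def snapshot(assignments: dict, out_dir: Path) -> Path:
    payload = json.dumps(assignments, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:12]
    path = out_dir / f"identity-{digest}.json"
    path.write_text(payload)
    return path

def restore(path: Path) -> dict:
    return json.loads(path.read_text())

if __name__ == "__main__":
    known_good = {"alice": ["admin"], "ci-deploy-token": ["deploy"]}
    saved = snapshot(known_good, Path("."))
    assert restore(saved) == known_good  # clean state can be re-applied
```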
If we layer these three pieces of research on top of each other, our risk map for 2026 starts to sharpen.
From Harmonic, we can see that employees are feeding substantial volumes of sensitive business and legal content into GenAI tools, and that a noticeable slice of this is going to free, training-by-default platforms and personal accounts.
From Moveworks, we see that agentic AI is already transforming operations; that non-technical staff are driving a lot of that change; and that executives are measuring success in terms of increased output (57%), process reinvention (53%), new capabilities (47%) and staff time saved (43%) – not just cost savings.
And from Rubrik, we get the warning that identity is now the main route in and across the environment; that NHIs and AI agents massively outnumber humans; and that attackers are increasingly likely to use those identities to ‘live off the land’ rather than smash the perimeter.
In short:

- Sensitive business and legal content is flowing into GenAI tools at scale, a meaningful slice of it through free accounts and platforms that train on their inputs.
- Agentic AI is transforming operations from the bottom up, with non-technical employees driving much of the change.
- Identity – human and non-human – is now the primary attack surface, and NHIs vastly outnumber the people they serve.
In that world, a single compromised identity (whether it’s human or not) can hand over the keys to a landscape of chat histories, uploaded documents, embedded assistants and automated workflows.
None of the three reports argue for banning AI. In fact, Moveworks shows that 96% of executives would rather have a useful agentic AI tool than the latest, fastest large language model, and the ROI numbers suggest they’re already seeing real value.
The common thread that runs through all of this research is visibility and management:

- Know which AI tools employees are actually using, sanctioned or not.
- Know what data is flowing into those tools, and through which accounts.
- Know which identities – human and non-human – can reach that data, and be able to restore them to a known-good state quickly.
By the time boards sit down with 2026’s risk reports, they’ll be asking themselves whether they treated identity and sensitive data as first-class risks in an AI-saturated environment – or whether they just assumed someone else had that covered.
Be one of the organisations that can confidently talk about AI in terms of resilience, not regret.