A recently released dataset from Entrust encompasses a billion identity verifications across more than 30 industries in 195 countries, analysed between late 2024 and late 2025. The headline is deceptively calm: the average global fraud rate sat at 3.1% in 2024. But that hides big regional swings: 4.3% in the Americas, just 1.4% in EMEA, and 2.1% across APAC.
Threat actors are no longer just stealing a card or phishing a password; they’re attacking every layer of identity. In Entrust’s sample for 2025, national ID cards made up almost 46% of fraudulent document submissions, followed by driver’s licences at 25% and passports at 19%. And they’re doing it with tools that didn’t exist a few years ago.
Deepfakes have become a daily reality. Entrust now links them to around one in five biometric fraud attempts. They show up as AI-generated faces, face swaps, animated selfies – all pumped into identity systems via injection attacks, where faked media is fed straight into the verification pipeline, bypassing the camera entirely. The frequency of these injection attacks jumped roughly 40% year on year.
Add device emulation and you get an attack rig that looks exactly like a legitimate smartphone or laptop from the system’s point of view. It’s no surprise that fraud-as-a-service markets are thriving; pre-built scripts and emulation tools can now be sold to anyone, not just dedicated criminals. Entrust has tracked 53 organised fraud rings since 2023, some spanning multiple clients and sectors.
Fraud has become a 24/7 business. Across Entrust’s network, attempts peak between 2 and 4am UTC, when defences in many regions are thin and operations teams are asleep. The working hours of attackers have globalised.
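That timing pattern is itself a detection signal. As a minimal sketch (with a hypothetical per-hour baseline and spike ratio; real systems would model per-region, per-channel baselines), a monitor can flag verification-attempt surges inside the overnight quiet window the report highlights:

```python
from datetime import datetime, timezone

# Hypothetical numbers for illustration only.
BASELINE_PER_HOUR = 1200      # e.g. learned from the previous 30 days
SPIKE_RATIO = 2.0             # flag when volume doubles the baseline
QUIET_HOURS_UTC = {2, 3, 4}   # the 2-4am UTC window from the report

def flag_offhours_spike(ts: datetime, attempts_this_hour: int) -> bool:
    """Flag when attempt volume spikes inside the overnight quiet window."""
    hour = ts.astimezone(timezone.utc).hour
    return hour in QUIET_HOURS_UTC and attempts_this_hour > BASELINE_PER_HOUR * SPIKE_RATIO

print(flag_offhours_spike(datetime(2025, 6, 1, 3, 0, tzinfo=timezone.utc), 3000))   # True
print(flag_offhours_spike(datetime(2025, 6, 1, 14, 0, tzinfo=timezone.utc), 3000))  # False
```

Even a crude rule like this routes overnight spikes to an on-call queue rather than letting them ride until the morning shift.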
The report’s most useful insight for CISOs is where fraud shows up in the customer lifecycle.
In crypto, for example, 67% of attacks now happen at onboarding, fuelled by sign-up bonuses and deposit rewards. Some exchanges are seeing fraud rates of 6.6%, the highest in financial services – compared with 1.5% at traditional banks and 1.6% at payments and merchants.
Where incentives are upfront (crypto platforms, vehicle rental), roughly two-thirds of fraudulent attempts are new account fraud. The playbook is straightforward: spin up synthetic or stolen identities, grab the bonus or asset, then disappear.
Elsewhere, long-term access is more valuable than a quick win. In payments, 82% of fraud occurs after onboarding; in professional services, it’s 64%, and in digital-first banks it’s 55%. This is where account takeover dominates. In the US alone, ATO losses hit $15.6 billion in 2024.
And if you look beyond finance, the picture is similar. Professional services and recruitment show fraud rates of 4.0% and 3.7% respectively – higher than many retail and telco environments. Fake candidates using synthetic identities and deepfakes are a route into your data centre.
All of this is a security story first and foremost, but it’s becoming a financial one too. Delinea’s 2025 cyber insurance study, surveying over 750 security leaders, shows how tightly underwriters now connect identity controls with insurability. Fully 99.5% of respondents say they had to demonstrate security controls to secure coverage, with identity controls front and centre: half needed authorisation and access controls, 45% credential management, 45% session monitoring, 43% identity governance, and 36% MFA.
At the same time, organisations are leaning on their policies harder – 72% filed at least one claim in the last year (up from 62%), and 37% filed multiple claims. A significant 70% saw their premiums rise, while only 2% saw a decrease. And the cover isn’t as generous as many boards assume: only 33% of policies cover lost revenue, and just 45% cover ransomware negotiations or payment.
On top of all this, 45% of organisations say their policy can be voided for lack of security controls, and 35% for misconfiguration. The thing that saves your budget after a breach might evaporate for the very reason you were breached.
Insurers have noticed where the real risk lies. A vast majority (97%) of respondents said identity-related controls influenced their premium or coverage terms at renewal. PAM was the single biggest differentiator, cited by 41%, followed by IGA (38%) and third-party access controls (32%). And that matches the incident data: 46% of claims were driven by identity issues or privileged account compromise.
In other words, if your privileged access story is weak, your insurance story will be, too.
The question, then, is what ‘good’ identity protection actually looks like today.
On the front door, Entrust’s data makes a strong case for robust document and biometric verification at onboarding – with liveness and randomness built in. Their Motion Liveness capability logs a fraud rate of under 0.1%, precisely because it makes it harder to reuse static or pre-recorded content. That’s the baseline you need when deepfakes account for one in five biometric fraud attempts.
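The principle behind that kind of liveness check is unpredictability. As a hedged sketch (this is not Entrust’s actual implementation, just an illustration of randomised challenge-response liveness), the server issues a motion sequence the attacker could not have recorded in advance:

```python
import secrets

# Illustrative motion prompts; a real system would verify the motions
# from video frames, not trust a client-reported list.
MOTIONS = ["turn head left", "turn head right", "look up", "blink twice", "smile"]

def issue_challenge(n: int = 3) -> list[str]:
    """Draw n distinct motions in a random, per-session order."""
    pool = list(MOTIONS)
    return [pool.pop(secrets.randbelow(len(pool))) for _ in range(n)]

def verify(challenge: list[str], observed: list[str]) -> bool:
    """Pass only when the observed motions match the issued sequence exactly."""
    return observed == challenge

challenge = issue_challenge()
print(verify(challenge, challenge))        # True: live user followed the prompt
print(verify(challenge, challenge[::-1]))  # False: wrong-order or replayed media
```

Because the sequence is freshly randomised per session, a pre-recorded deepfake clip has no way to anticipate it – which is exactly why static and replayed content fails this class of check.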
Across the lifecycle, the model shifts from ‘check once’ to continuous proof of identity. Multi-factor authentication, device intelligence, anomaly detection and behavioural analytics all help to flag the moment a trusted account starts behaving like a fraud ring.
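In practice, those signals usually feed a risk score. The sketch below is a minimal, hypothetical version of that idea – the signal names, weights, and thresholds are invented for illustration, not drawn from either report:

```python
# Each suspicious session signal adds weight; the total decides whether to
# allow, step up to MFA, or block. All values here are illustrative.
WEIGHTS = {
    "new_device": 3,         # device fingerprint never seen for this account
    "unusual_hour": 2,       # activity outside the user's normal window
    "impossible_travel": 5,  # geolocation jump faster than plausible travel
    "high_velocity": 3,      # burst of payments or data exports
}

def assess_session(signals: set[str]) -> str:
    """Map triggered signals to an action for this session."""
    score = sum(WEIGHTS.get(s, 0) for s in signals)
    if score >= 8:
        return "block"
    if score >= 5:
        return "step_up_mfa"
    return "allow"

print(assess_session({"unusual_hour"}))                        # allow
print(assess_session({"new_device", "unusual_hour"}))          # step_up_mfa
print(assess_session({"impossible_travel", "high_velocity"}))  # block
```

The point is the shape, not the numbers: a trusted account that suddenly trips several of these signals at once is behaving like a fraud ring, and the system escalates accordingly.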
And for high-value access (admins, automation accounts, third-party providers) Privileged Access Management (PAM) and identity governance become non-negotiable. That’s where most insurers now focus their questionnaires, and where nearly half of last year’s claim incidents originated.
But (to finish on a positive note), there is a business case beyond ‘please don’t hack us’. Entrust’s joint work with DocuSign, for example, shows organisations with robust identity verification save an average of $8 million a year in fraud-related costs.
Here are the practical lessons we’ve gathered from this research.
First, map your exposure. Where are identities verified, re-verified and given extra privilege – at onboarding, at login, or during high-risk actions like payments and data exports?
Second, get fraud, IAM and risk/insurance in the same room. Underwriters are now effectively an external assessor of your identity posture. If the people who buy insurance and the people who run PAM and MFA aren’t comparing notes, you’re leaving money (and possibly cover) on the table.
Third, prioritise controls that do double duty – the ones that both cut fraud and improve insurability: strong document/biometric checks with liveness, continuous authentication and device intelligence, and a serious approach to privileged access and third-party accounts.
AI will be part of the defence, just as it’s part of the attack. Insurers already offer premium discounts to 86% of Delinea’s respondents who use AI-powered security, but they’re also carving out exclusions for things like model failure and prompt injection. Use AI, but govern it like a potential liability.
Identity fraud in 2026 is industrial, AI-accelerated and increasingly priced into your insurance policy. So protecting identity is now a core part of how your organisation manages financial risk, and earns the right to stay in business after something goes wrong.