What we’ve learnt about deepfake scams in 2025
From fake celebrity endorsements to cloned voices in mobile scams, 2025 proved that deepfakes are now a real business and consumer risk.
We’ve been focused on the balance between red and blue this month. This week, we read a new report from PasswordManager.com about the rise of fake job ads – and for red teams, it reads like a masterclass in psychological manipulation.
The report revealed that six in ten American job seekers encountered fake job postings or scam recruiters during their hunt. Of those who ran into scams, 40% fell for them – with 30% responding to fraudulent recruiters and 26% applying to counterfeit job listings.
That’s a phishing success rate most red-team operators could only hope to replicate in an exercise.
And the critical issue here is scale: these aren’t employees failing a security test; they’re everyday people targeted in the open market. Each fake recruiter email or LinkedIn message is a social-engineering pretext built with the same craft red teams deploy during credential-harvesting exercises.
The survey, which included 1,254 respondents, sketches a broad (and expensive) crisis.
The emerging pattern looks like this: attackers exploit trust in familiar channels (LinkedIn, email, SMS) and lean on professional tone and urgency, much like corporate phishing campaigns. The psychological levers are identical: authority, opportunity, scarcity.
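Those three levers are concrete enough to check for mechanically. As a minimal, illustrative sketch (not something from the report – the keyword lists and the two-lever threshold are assumptions for demonstration), a blue team triaging inbound mail could score messages for how many levers they pull at once:

```python
# Illustrative pretext-lever scorer for recruiter-themed lures.
# The keyword lists and the >= 2 lever threshold are assumptions
# chosen for demonstration, not a production detection rule.

LEVERS = {
    "authority": ["hr department", "hiring manager", "talent acquisition"],
    "opportunity": ["job offer", "promotion", "interview invite"],
    "scarcity": ["urgent", "expires", "final notice", "respond today"],
}


def score_message(text: str) -> dict:
    """Count keyword hits per psychological lever and flag messages
    that pull two or more levers at once."""
    lowered = text.lower()
    hits = {lever: sum(kw in lowered for kw in kws)
            for lever, kws in LEVERS.items()}
    hits["suspicious"] = sum(1 for v in hits.values() if v > 0) >= 2
    return hits


# Example: a message combining authority, opportunity, and scarcity cues.
msg = "Urgent: the hiring manager needs you to confirm your interview invite today."
print(score_message(msg))
```

Real lures won’t always match fixed keywords, of course – the point is only that the levers themselves are observable features, not vibes.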
These stats blur the line between consumer fraud and enterprise risk. If 40% of job seekers can be convinced by a recruiter pretext, what happens when an employee receives an ‘urgent HR update’ or ‘promotion interview invite’ inside the corporate network?
For red teams, job-offer scams are real-world case studies in emotional payload design. They’re built on believable authority, social validation, and timing that exploits stress or ambition. They show how trust can be engineered without a single exploit.
And for blue teams, the findings redefine the perimeter: HR and talent teams now sit squarely on the frontline of social-engineering defence, and they need to adapt accordingly.
What this survey really exposes is how fragile digital trust has become. Attackers just need plausible stories to get in – and they’re getting really good at fabricating those stories.
The red team has effectively gone HR, and the rest of the security stack is still catching up. For defenders, the takeaway here is behavioural: if criminals can convincingly impersonate your organisation’s recruiters, you need to consider what else they could impersonate across every aspect of operations.