Why identity protection has to level up in 2026
Identity fraud in 2026 is AI-driven, industrial and costly. Learn how deepfakes, fraud rings and new cyber insurance rules are changing identity protection.
We’ve been focused on the balance between red and blue this month. This week, we read a new report from PasswordManager.com about the rise of fake job ads – and for red teams, it reads like a masterclass in psychological manipulation.
The report revealed that six in ten American job seekers encountered fake job postings or scam recruiters during their hunt. Of those who ran into scams, 40% fell for them – with 30% responding to fraudulent recruiters and 26% applying to counterfeit job listings.
That’s a phishing success rate most red-team operators could only hope to replicate in an exercise.
And the critical issue here is scale: these aren’t employees failing a security test; they’re everyday people targeted in the open market. Each fake recruiter email or LinkedIn message is a social-engineering pretext built with the same craft red teams deploy during credential-harvesting exercises.
The survey of 1,254 respondents sketches a broad (and expensive) crisis.
The emerging pattern looks like this: attackers are exploiting trust in familiar channels (LinkedIn, email, SMS) and leveraging professional tone and urgency, much like corporate phishing campaigns. The psychological levers are identical: authority, opportunity, scarcity.
These stats blur the line between consumer fraud and enterprise risk. If 40% of job seekers can be convinced by a recruiter pretext, what happens when an employee receives an ‘urgent HR update’ or ‘promotion interview invite’ inside the corporate network?
For red teams, job offer scams are real-world case studies in emotional payload design. They’re built on believable authority, social validation, and timing that exploits stress or ambition. They show how trust can be engineered without a single exploit.
And for blue teams, the findings redefine the perimeter: HR and talent teams now sit squarely on the frontline of social-engineering defence, and they need the training, reporting channels and verification tooling to match.
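One concrete piece of that verification tooling is flagging sender domains that look confusingly similar to a legitimate hiring domain – the typosquats behind many fake recruiter emails. A minimal sketch, assuming hypothetical trusted domains and an illustrative distance threshold:

```python
# Minimal sketch: flag sender domains that are near-misses of a trusted
# hiring domain. Domain names and the threshold below are illustrative,
# not taken from any specific product or policy.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of domains your organisation actually hires from.
TRUSTED = {"example.com", "careers.example.com"}

def classify_sender(domain: str) -> str:
    """Return 'trusted', 'lookalike', or 'unknown' for a sender domain."""
    domain = domain.lower().strip()
    if domain in TRUSTED:
        return "trusted"
    # A distance of 1-2 from a trusted domain (e.g. 'examp1e.com') is the
    # classic typosquat pattern worth routing to human review.
    if any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED):
        return "lookalike"
    return "unknown"
```

For example, `classify_sender("examp1e.com")` returns `"lookalike"` while `"example.com"` stays `"trusted"`. In practice this belongs alongside, not instead of, SPF/DKIM/DMARC checks, since lookalike detection only catches domains that resemble yours.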
What this survey really exposes is how fragile digital trust has become. Attackers just need plausible stories to get in – and they’re getting alarmingly good at fabricating them.
The red team has effectively gone HR, and the rest of the security stack is still catching up. For defenders, the takeaway here is behavioural: if criminals can convincingly impersonate your organisation’s recruiters, you need to consider what else they could impersonate across every aspect of operations.