We’ve been focused on the balance between red and blue this month. This week, we read a new report from PasswordManager.com about the rise of fake job ads – and for red teams, it serves as a masterclass in psychological manipulation.
The report revealed that six in ten American job seekers encountered fake job postings or scam recruiters during their hunt. Of those who ran into scams, 40% fell for them – with 30% responding to fraudulent recruiters and 26% applying to counterfeit job listings.
That’s a phishing success rate that most red-team operators can only simulate.
And the critical issue here is scale: these aren’t employees failing a security test; they’re everyday people targeted in the open market. Each fake recruiter email or LinkedIn message is a social-engineering pretext built with the same craft red teams deploy during credential-harvesting exercises.
The survey, which included 1,254 respondents, sketches a broad (and expensive) crisis.
The emerging pattern looks like this: attackers exploit trust in familiar channels (LinkedIn, email, SMS) and lean on professional tone and urgency, much like corporate phishing campaigns. The psychological levers are identical: authority, opportunity, scarcity.
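Those three levers can be made concrete. The sketch below scores a recruiter message for lever-associated phrasing; the keyword lists are purely illustrative assumptions for demonstration, not a production detector or anything from the report.

```python
# Illustrative sketch: score a recruiter message for the three
# psychological levers named above (authority, opportunity, scarcity).
# The phrase lists are hypothetical examples chosen for this demo.
LEVERS = {
    "authority": ["hiring manager", "hr department", "verified recruiter"],
    "opportunity": ["exclusive role", "promotion", "immediate start"],
    "scarcity": ["limited time", "respond within", "last chance"],
}

def lever_score(message: str) -> dict:
    """Count how many phrases from each lever appear in the message."""
    text = message.lower()
    return {lever: sum(phrase in text for phrase in phrases)
            for lever, phrases in LEVERS.items()}

msg = ("I'm the hiring manager for an exclusive role with immediate start. "
       "Please respond within 24 hours.")
print(lever_score(msg))  # {'authority': 1, 'opportunity': 2, 'scarcity': 1}
```

A real screening pipeline would use far richer signals (sender reputation, domain age, link analysis), but even a crude lever count shows how mechanical these pretexts are.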
These stats blur the line between consumer fraud and enterprise risk. If 40% of job seekers can be convinced by a recruiter pretext, what happens when an employee receives an ‘urgent HR update’ or ‘promotion interview invite’ inside the corporate network?
For red teams, job-offer scams are real-world case studies in emotional payload design. They’re built on believable authority, social validation, and timing that exploits stress or ambition. They show how trust can be engineered without a single exploit.
And for blue teams, the findings redefine the perimeter: HR and talent teams now sit squarely on the frontline of social-engineering defence, and they need verification processes to match.
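One concrete control is verifying that a recruiter’s sender domain actually belongs to the company they claim to represent. A minimal sketch, assuming a maintained allow-list of official recruiting domains (the company name and domains below are hypothetical):

```python
# Minimal sketch: check whether a recruiter email's sender domain is on
# the claimed company's allow-list of official recruiting domains.
# OFFICIAL_DOMAINS is a hypothetical example, not real data.
OFFICIAL_DOMAINS = {
    "ExampleCorp": {"examplecorp.com", "careers.examplecorp.com"},
}

def recruiter_domain_ok(claimed_company: str, sender: str) -> bool:
    """Return True only if the sender's domain is allow-listed for the company."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain in OFFICIAL_DOMAINS.get(claimed_company, set())

print(recruiter_domain_ok("ExampleCorp", "talent@examplecorp.com"))   # True
print(recruiter_domain_ok("ExampleCorp", "talent@examp1ecorp.com"))   # False: lookalike
```

Exact-match allow-listing deliberately fails closed: an unknown company or a lookalike domain (note the digit in `examp1ecorp.com`) is rejected rather than guessed at.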
What this survey really exposes is how fragile digital trust has become. Attackers just need plausible stories to get in – and they’re getting really good at fabricating those stories.
The red team has effectively gone HR, and the rest of the security stack is still catching up. For defenders, the takeaway here is behavioural: if criminals can convincingly impersonate your organisation’s recruiters, you need to consider what else they could impersonate across every aspect of operations.