We’ve been focused on the balance between red and blue this month. This week, we read a new report from PasswordManager.com about the rise of fake job ads – and for red teams, it serves as a masterclass in psychological manipulation.
The report revealed that six in ten American job seekers encountered fake job postings or scam recruiters during their hunt. Of those who ran into scams, 40% fell for them – with 30% responding to fraudulent recruiters and 26% applying to counterfeit job listings.
That’s a phishing success rate most red-team operators could only hope to achieve in an exercise.
And the critical issue here is scale: these aren’t employees failing a security test; they’re everyday people targeted in the open market. Each fake recruiter email or LinkedIn message is a social-engineering pretext built with the same craft red teams deploy during credential-harvesting exercises.
The survey included 1,254 respondents, and it sketches a broad (and expensive) crisis.
The emerging pattern looks like this: attackers exploit trust in familiar channels (LinkedIn, email, SMS) and lean on professional tone and urgency, much as corporate phishing campaigns do. The psychological levers are identical: authority, opportunity, scarcity.
These stats blur the line between consumer fraud and enterprise risk. If 40% of job seekers can be convinced by a recruiter pretext, what happens when an employee receives an ‘urgent HR update’ or ‘promotion interview invite’ inside the corporate network?
For red teams, job-offer scams are real-world case studies in emotional payload design. They’re built on believable authority, social validation, and timing that exploits stress or ambition. They show how trust can be engineered without a single exploit.
And for blue teams, the findings redefine the perimeter: HR and talent teams now sit squarely on the frontline of social-engineering defence, and they need to be equipped accordingly.
What this survey really exposes is how fragile digital trust has become. Attackers just need plausible stories to get in – and they’re getting very good at fabricating them.
The red team has effectively gone HR, and the rest of the security stack is still catching up. For defenders, the takeaway here is behavioural: if criminals can convincingly impersonate your organisation’s recruiters, you need to consider what else they could impersonate across every aspect of operations.