Back to being human
Human risk remains one of the biggest cybersecurity threats in 2026. New data shows why people, not just AI, are still being exploited – and what CISOs must do next.
You’re scrolling through social media when you spot a perfectly produced video of your favourite celebrity promoting an investment app or a giveaway. It feels familiar, even trustworthy. And there’s a good chance it’s entirely fake.
According to McAfee Labs’ 2025 research, scammers have moved far beyond grainy YouTube fakes. The firm’s ‘Most Dangerous Celebrity’ ranking places Taylor Swift at number one in the US and globally, ahead of public figures including Scarlett Johansson and Jenna Ortega.
In a survey of 8,600 respondents worldwide, 72% of the Americans polled say they’ve seen fake celebrity or influencer endorsements, and 39% admit to having clicked on one. One in ten lost money or personal data, with an average loss of US$525.
And on the flip side, only 29% of people feel confident they can identify a fake, while 21% openly say they have very low confidence.
We’ve just read Bitdefender’s 2025 survey on consumer cybersecurity, and it adds scale to this picture. Of the consumers surveyed, 37% say their biggest worry about AI is its use in scams like deepfakes or voice clones.
Nearly 70% of people say they encountered a scam in the past year, and about one in seven say they were actually victimised.
Social media has overtaken email as the top vector (34%), and younger users (who post and share more openly) are roughly twice as likely to fall for scams (20% compared with 9.7%).
Add in weak habits (37% write down passwords; 48% accept all cookies without review) and you’ve got an environment where deepfake scams can thrive.
Scammers don’t need to invent pressure tactics when they can simply borrow trust. They use AI to clone a celebrity’s face or voice, frame the message as ‘urgent’ or ‘exclusive’, and push it via social media ads, streaming sites, or chat apps. McAfee calls this the ‘celebrity hijack’ formula.
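To make that formula concrete from a defender’s point of view, here’s a minimal sketch of what screening a post for ‘hijack’ signals might look like. It’s illustrative only: the phrase lists, weights, and the hijack_risk_score function are hypothetical, not McAfee’s methodology or any vendor’s actual detection logic.

```python
# Hypothetical heuristic scorer for 'celebrity hijack' signals.
# Signal lists and weights are illustrative assumptions, not a real product's rules.

URGENCY_PHRASES = ["act now", "limited time", "exclusive offer", "only today"]
FINANCIAL_BAIT = ["giveaway", "double your", "investment app", "crypto"]

def hijack_risk_score(post_text: str, uses_celebrity_likeness: bool) -> float:
    """Return a 0-1 heuristic risk score for a social media post."""
    text = post_text.lower()
    score = 0.0
    if uses_celebrity_likeness:
        score += 0.4  # borrowed trust: the core of the formula
    if any(p in text for p in URGENCY_PHRASES):
        score += 0.3  # manufactured urgency
    if any(p in text for p in FINANCIAL_BAIT):
        score += 0.3  # the financial hook
    return min(score, 1.0)

if __name__ == "__main__":
    post = "EXCLUSIVE OFFER: this investment app doubles your money - act now!"
    print(hijack_risk_score(post, uses_celebrity_likeness=True))  # 1.0
```

In practice, real detection depends on media forensics and ad-network signals rather than keyword matching; the point of the sketch is simply that the formula’s ingredients (borrowed likeness, urgency, a financial hook) are predictable enough to screen for.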
Then the social channel and mobile device become the delivery system. As one of the Bitdefender report’s case studies points out, scammers will ‘clone a son’s voice’ to call a parent asking for emergency money. With younger users primed to click and older users more trusting, the attack surface spans generations.
Despite more advanced technology than ever, we’re incredibly vulnerable. We already noted that only 29% of people feel confident identifying a fake; that’s a critical warning sign. At the same time, the habits uncovered by Bitdefender (poor password discipline, unprotected phones) tell us the weak link is still very much human behaviour.
Organisations often treat deepfakes as a brand or PR risk rather than a systemic fraud risk. But the data shows that’s a mistake: large numbers of consumers are exposed, and the risks are financial, reputational, and psychological.
If you’re in a leadership position, you need to step away from the idea that deepfake scams are an individual consumer problem, start asking tough questions about where your organisation is exposed, and then work the strategic levers available to you.
What began as viral deepfake jokes has matured into a scalable fraud economy. The question to ask now is how resilient your organisation would be in the face of a deepfake scam – because it’s time to treat them as real threats.