What we’ve learnt about deepfake scams in 2025

by Black Hat Middle East and Africa

You’re scrolling through social media when you spot a perfectly produced video of your favourite celebrity promoting an investment app or a giveaway. It feels familiar, even trustworthy. And there’s a good chance it’s entirely fake.

According to McAfee Labs’ 2025 research, scammers have moved far beyond grainy YouTube fakes. The firm’s ‘Most Dangerous Celebrity’ ranking places Taylor Swift at number one in the US and globally, ahead of public figures including Scarlett Johansson and Jenna Ortega.

In a survey of 8,600 global respondents, 72% of Americans say they’ve seen fake celebrity or influencer endorsements, and 39% admit to having clicked on one. One in ten lost money or personal data, with an average loss of USD 525.

And on the flip side, only 29% of people feel confident they can identify a fake, while 21% openly say they have very low confidence.

The consumer scam backdrop is bigger than you think 

We just read Bitdefender’s 2025 survey on consumer cybersecurity, and it adds scale to this picture. Of the consumers surveyed, 37% say their biggest worry about AI is its use in scams like deepfakes or voice clones.

Nearly 70% of people say they encountered a scam in the past year, and about one in seven say they were actually victimised.

Social media has overtaken email as the top scam vector (34%), and younger users (who post and share more openly) are roughly twice as likely as older users to fall for scams (20% compared with 9.7%).

Add in weak habits (37% write down passwords; 48% accept all cookies without review) and you’ve got an environment where deepfake scams can thrive.

How these scams actually play out

Scammers don’t need to invent pressure tactics when they can just borrow trust. They use AI to clone a celebrity’s face or voice, position the message as ‘urgent’ or ‘exclusive’, and push it via social media ads, streaming sites, or chat apps. McAfee calls this the ‘celebrity hijack’ formula.

Then the social channel and mobile device become the delivery system. As one of the Bitdefender report’s case studies points out, scammers will ‘clone a son’s voice’ to call a parent asking for emergency money. With younger users primed to click and older users more trusting, the attack surface spans generations.

Why are we failing to stop deepfake scams? 

Despite more advanced detection technology than ever, we’re incredibly vulnerable. We already noted that only 29% of people feel confident identifying a fake; that’s a critical warning sign. At the same time, the habits uncovered by Bitdefender (poor password discipline, unprotected phones) tell us the weak link is still very much human behaviour.

Organisations often treat deepfakes as a brand or PR risk rather than a systemic fraud risk. But the data shows that’s a mistake: large numbers of consumers are exposed, and the risks are financial, reputational, and psychological. 

What boards, CISOs and investors should watch in 2026

If you’re in a leadership position, you need to step away from the idea that deepfake scams are an individual consumer problem, and start asking tough questions: 

  • What if a deepfake uses our CEO’s voice to approve a transfer?
  • What if a cloned influencer hijacks our brand in 60 seconds of video and sends customers to a fake site?
  • What if voice clone fraud hits supply chain payments or onboarding?

And then work with these three strategic levers: 

  • Detect deception at scale. Deploy tools that screen for cloned voices, manipulated faces, and influencer brand hijack attempts (see the sketch after this list).
  • Secure your voice and identity surface. Finance and HR functions are obvious targets – but so are social and community teams, brand teams and any public-facing account.
  • Upgrade culture and training. If fewer than 30% of people feel confident spotting fakes, you need to push training, awareness and simulation now.
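
To make the first lever concrete, here’s a minimal sketch of what screening at scale could look like: a triage pipeline that scores each inbound media item and routes anything above a threshold to a human analyst. Everything here is illustrative, not a specific product or API – `clone_likelihood` is a deliberate stub where a real detection model or vendor service would go, and the `MediaItem` fields and threshold are assumptions.

```python
# Illustrative triage pipeline: score inbound media for deepfake
# likelihood and escalate suspicious items to human review.
# clone_likelihood() is a stub -- in production you would swap in a
# real detector or vendor API. All names here are hypothetical.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # illustrative: tune against your false-positive budget

@dataclass
class MediaItem:
    source: str  # e.g. "paid_social_ad", "support_call", "onboarding_video"
    path: str    # where the audio/video file lives

def clone_likelihood(item: MediaItem) -> float:
    """Placeholder score in [0, 1]; replace with a real detection model."""
    return 0.0  # stub: a real implementation would analyse item.path

def triage(items: list[MediaItem]) -> list[tuple[MediaItem, float]]:
    """Return items that warrant a human analyst, highest score first."""
    scored = [(item, clone_likelihood(item)) for item in items]
    flagged = [(item, score) for item, score in scored if score >= REVIEW_THRESHOLD]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    queue = [
        MediaItem("paid_social_ad", "ads/celebrity_promo.mp4"),
        MediaItem("support_call", "calls/transfer_request.wav"),
    ]
    for item, score in triage(queue):
        print(f"Escalate {item.path} from {item.source}: score {score:.2f}")
```

The point isn’t the stub; it’s the shape. Detection tools only pay off when their scores feed a review workflow with an owner, a threshold someone defends, and a clear escalation path.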

Not just viral memes – deepfakes are in the boardroom 

What began as viral deepfake jokes has matured into a scalable fraud economy. The question to ask now is how resilient your organisation would be in the face of a deepfake scam – because it’s time to treat these attacks as real threats.
