Critical touchpoints for cybersecurity professionals
Stay ahead of the evolving threat landscape.
Welcome to the new 211 cyber warriors who joined us last week. 🥳 Each week, we'll be sharing insights from the Black Hat MEA community. Read exclusive interviews with industry experts and key findings from the #BHMEA23 keynote stage.
Keep up with our weekly newsletters on LinkedIn – subscribe here.
AI-enabled vishing.📲
Because cybersecurity researchers at WatchGuard predict it’ll be on the rise in 2024.
Think phishing, but with a human voice. Except the emergence of generative AI means the voice doesn't have to belong to a real human – it can even be a deepfaked copy of a voice the victim already trusts.
Voice over Internet Protocol (VoIP) combined with automation tech already enables scammers to mass-dial thousands of people. But once someone answers a call, a human voice has to be there to talk to them and carry out the social engineering component of the scam.
It’s expected that highly convincing deepfake audio, created with generative AI tools, will remove the need for a live scammer on the line – allowing threat groups to run hundreds or thousands of artificial conversations at a time. And this will vastly increase the scale and impact of vishing.
Have you ever been fooled by a vishing call?
1. Yes 😞
2. No 😬
The Photon Research Team recently told Information Security Media Group (ISMG) that cybercriminals are now actively using deepfake audio or video tech to deliver convincing, credible impersonation scams.
And worryingly, their research found that cybercriminals are leveraging commercially available tools to do this.
A specialised attack group sharing information on a Russian-speaking cybercrime forum is offering vishing services to other threat actors for fees upwards of USD 1,000, leveraging those commercially available tools to create and deploy voice clones and customised voice robots.
And they have plenty of customers.
Impersonation using deepfake tech is becoming incredibly convincing – so much so that grandparents can be convinced they’re on the phone with their grandchild when they’re actually talking to an AI bot.
One increasingly common attack identified in the Photon research targets bank customers. The victim picks up the phone and hears a prerecorded message that seems to be from their bank, asking them to provide their account details – and their voice response is recorded and used to bypass the bank’s biometric security measures, giving the attacker access to the victim’s account.
We love a tiny silver lining: one possible barrier to entry for vishing hopefuls is that customised, credible voice impersonations can cost more to create than lower-quality audio bots.
But it really is a very small problem for attackers. Because as Photon told ISMG:
"If an attacker obtains the right sample during the reconnaissance phase of an attack, they may not need voice impersonation tools, which are too expensive and complex for many cybercriminals. Instead, an attacker may edit their sample to produce whatever sounds they are looking for."
More vishing attacks are coming in 2024.
Prepare yourself, educate your team, and speak to cold-callers with caution.
Do you have an idea for a topic you'd like us to cover? We're eager to hear it! Drop us a message and share your thoughts. Our next newsletter is scheduled for 20 December 2023.
Catch you next week,
Steve Durning
Exhibition Director
Join us at Black Hat MEA 2024 to grow your network, expand your knowledge, and build your business.
Join the newsletter to receive the latest updates in your inbox.