Can generative AI create new cyberthreats?

by Black Hat Middle East and Africa

When pressed for an opinion about OpenAI’s ChatGPT, Apple Co-Founder Steve Wozniak told Business Insider it’s “pretty impressive,” but could “make horrible mistakes by not knowing what humanness is.”

From a cybersecurity perspective, though, the dangers of ChatGPT – and other generative AI models – could be far greater than robot error. Since the tool's release in November 2022, cybersecurity experts have been quietly assessing how it could be used with malicious intent. And new research released by BlackBerry found that 51% of IT decision-makers believe there will be a successful cyberattack credited to ChatGPT within a year.

ChatGPT = faster and more complex phishing scams

In an interview with the Guardian, UK-based cybersecurity firm Darktrace warned that it had recorded a rise in complex scams since the launch of ChatGPT. The number of email attacks against the firm's customers hadn't changed significantly overall, but the number of email scams involving complex language (with more text, better punctuation, and more advanced semantics) had increased. A Darktrace representative suggested this indicates that hackers are using generative AI to develop scams that rely more heavily on building user trust, and then exploiting that trust to create new vulnerabilities.

A generative AI model trained on a large dataset of phishing emails can automatically generate a high volume of new (and very convincing) phishing emails – all of them unique and hard to detect. One recent report by New Scientist found that AI also makes it far cheaper to build out a phishing campaign, with criminals able to generate phishing emails at 4% of the cost of writing them manually.

Similarly, phone-based social engineering attacks can use generative AI to create an immediate sense of trust with the person who picks up the phone; the speech synthesiser VALL-E, for example, can create a near-perfect match of someone's voice from an audio recording as short as three seconds.

As Matt Duench (Senior Director of Product Marketing at Okta) told Forbes, one of the key defences against phishing campaigns has, until now, been that the recipient of a phishing attempt can look for oddities in text or audio, like bad spelling or unconventional grammar, and take those oddities as a warning that the content might be malicious. But the accuracy of generative AI in recreating native-speaker language patterns, and even integrating emotional speech cues into AI-generated audio, will make that defence a thing of the past.
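To make that defence concrete, here is a minimal, hypothetical sketch (not from the article, and not any real product's filter) of the kind of rule-based "oddity" check Duench is describing – counting misspellings, pressure phrases, and excessive punctuation. Fluent, AI-generated text produces almost none of these signals, which is exactly why this traditional line of defence weakens.

```python
import re

# Illustrative indicator lists only; real email filters use far larger corpora and models.
COMMON_MISSPELLINGS = {"recieve", "acount", "verfy", "urgnet"}
PRESSURE_PHRASES = ["act now", "account will be suspended", "verify immediately"]

def oddity_score(email_text: str) -> int:
    """Count crude warning signs (misspellings, pressure phrases, '!!') in an email body."""
    lowered = email_text.lower()
    words = re.findall(r"[a-z0-9']+", lowered)
    score = sum(1 for w in words if w in COMMON_MISSPELLINGS)
    score += sum(1 for phrase in PRESSURE_PHRASES if phrase in lowered)
    score += email_text.count("!!")  # excessive punctuation
    return score

if __name__ == "__main__":
    clumsy = "URGNET!! Please verfy your acount now or it will be suspended!!"
    polished = "Hello, we noticed an unusual sign-in and have paused your account as a precaution."
    print(oddity_score(clumsy))    # high score: the scam flags itself
    print(oddity_score(polished))  # near zero: reads cleanly, even if malicious
```

A well-written, AI-generated lure scores much like legitimate correspondence here, which is the point: defences built on spotting sloppy language stop working once the language is no longer sloppy.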

Essentially, all of this amounts to a change of strategy. AI is giving hackers a quick way to enhance scams and increase the volume of threat attempts. But as criminals explore the functionality of generative AI further, this tech could also enable a whole new wave of cybercriminals to begin malicious activity.

Can generative AI write malware?

Before ChatGPT took the world by storm, cybercriminals were already experimenting with ways to deploy AI for nefarious purposes. In 2019, for example, scammers used an AI voice tool to create a deepfake of Elon Musk, which enabled them to steal almost USD $380,000 in cryptocurrency from social media users and investors.

And the scope for experimentation is growing all the time as new AI tools are released to market. Across the cybersecurity industry, experts are discussing the possibility that, as generative AI becomes more advanced, it could be used to create bespoke malware – meaning that criminals who don't have the skills to write malware code themselves could use AI to create that code, removing a major barrier to entry for threat actors.

In fact, as reported by the Washington Post, ChatGPT can already write malware – it just doesn't do a very good job of it yet. As long as generative AI is being used to enhance existing attack strategies, the existing principles of cybersecurity retain roughly the same efficacy (in theory, at least): we're dealing with the same kinds of attacks, so the same lines of defence apply.

It’s when generative AI matures, and enables criminals to create and execute attacks in novel ways, that the potential threat of this technology could really come into its own. One of the limitations of current models, for example, is that although they’re trained on a vast range of data, they don’t have unlimited information to generate from – right now, they’re not tapped into the ‘live’ internet.

When limitations like this are changed or removed, the scope for growth in functionality and use cases will expand – and could significantly change the landscape for cybersecurity.
