Shifting eyes to the threats of the future

by Black Hat Middle East and Africa


Welcome to Black Hat MEA's new content platform. Each week we bring you insider insights from our community – including interviews with industry experts, new cybersecurity innovations, and key moments from the #BHMEA keynote stage.

If you haven’t already, subscribe here to receive these newsletters on your LinkedIn feed.


📣 This week we’re focused on…

Generative AI (hello, ChatGPT) and the increasingly spacious attack surface that cybercriminals have to play with.

What’s generative AI up to?

Well, a lot.

Since its release into the wild of the internet in November 2022, users have asked ChatGPT, the AI-powered chatbot, to do plenty of bizarre and questionable things – like build a magical potato, cheat on school essays, and write fully-formed sci-fi stories.

But in cybersecurity, we’re thinking less about what’s happening right now and more about what might happen in the future.

Why?

Because ChatGPT is evidence of advancements in AI that could change the way loads of things work. Including cyberattacks.

Recent research by BlackBerry found that 51% of IT decision-makers believe there will be a successful cyberattack credited to ChatGPT within a year.

And while AI can’t reliably write malicious code on its own right now, it’s already making work easier for cybercriminals: increasing the speed and volume of phishing attacks, for example, and smoothing out the language errors in phishing emails and audio recordings that would otherwise alert people to potential nefariousness.

Some numbers on generative AI and cybersecurity

This review of studies by a group of Nigerian computer scientists dug into 936 papers on generative AI and cyberthreats, and then narrowed that down to the 46 most relevant articles.

The results of reviewing those 46 papers showed that:

➡️ 56% of AI-driven cyberattack techniques identified were in the access and penetration phase

➡️ 12% each were in the exploitation phase and the command-and-control phase

➡️ 11% were using AI techniques in the reconnaissance phase

➡️ 9% were deploying AI in the delivery phase

And overall, the researchers concluded that existing cybersecurity infrastructure will “become inadequate to address the increasing speed, and complex decision logic of AI-driven attacks.”

📰 Read our blog: Can generative AI create new cyberthreats?

💬 Share on Twitter

💭 Imagine this.

Suspend disbelief for just a moment, and imagine this:

A world in which AI-based cybersecurity attacks have evolved beyond current recognition. Techniques are both novel and advanced, making them incredibly difficult to detect – and even once they’ve been detected, they’re hard to shut down.

Simultaneously, the attack surface for criminal activity is growing bigger. Threat actors have new vulnerabilities to work with – driven by socio-economic changes around the world that are influencing the way people work, and a boom in IoT and remote-working devices that are hard for organisations to keep track of. And on top of all that, they have AI. 🤯

They can launch bigger, more destructive attacks – faster, and at a lower cost. And they can do it across that expanded surface, reaching more victims and causing more damage.

OK, we’re doing a little speculative fear-mongering here – but the more modest reality is that generative AI is becoming more competent at the very same time as the attack surface is growing.

So cybersecurity innovators have to stay ahead.

📰 Read our blog: The attack surface grows

💬 Share on Twitter


Black Hat MEA is back from 📅 14 - 16 November 2023. Want to be a part of it? Register now.

Join the conversation online using #BHMEA23 and @Blackhatmea
