The entire industry talked about AI all the way through 2023 – and this year won’t be any different. New AI use cases are emerging all the time, but not all of them are positive.
The problem-solving power and hyper-convenience of AI tools are countered by their potential for harm. And as threat actors explore ways to leverage AI for exploitative purposes, it’s critical that the cybersecurity industry – and every organisation operating in the digital world – works to understand and mitigate AI threats.
Here are three significant threats we expect to see more of this year.
Around the world, governments, organisations and individuals have been struggling to respond to the avalanche of deepfakes that are being released online. Already, deepfakes have been used as tools for abuse, exploitation, and misinformation. But their full potential for harm has yet to be reached.
According to research by London-based ID verification firm Onfido, deepfake fraud attempts rose by 3,000% in 2023.
And as we move through a new year, we’ll see deepfakes used as tools of manipulation in everything from individual scams to national elections and warfare.
As deepfake tech becomes more efficient and more accessible to threat actors without extensive AI knowledge, we’ll also see more low-level deepfake scams and financial cons. The scale of deepfake use is likely to grow quickly – putting more and more people at risk.
Cybersecurity teams are already leaning on AI to discover zero-day vulnerabilities in their networks. AI can make these discoveries far quicker than human analysts – and as a result, patching operations can become more efficient.
But AI can also create zero-day threats – because attackers can use AI models to find zero-days before you find them, and exploit them before you even know they exist.
The good news is that this isn’t a widespread problem – yet. The researchers who have begun to demonstrate these attacks are keeping their findings to themselves, because publishing them could accelerate threat actors’ understanding and exploitation of AI-driven zero-day discovery.
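To make the idea of automated discovery concrete, here is a deliberately naive sketch in Python: a brute-force mutation fuzzer that perturbs a seed input and watches a target program for crashes. The target binary, seed, and output filename are hypothetical stand-ins, and real AI-assisted tooling layers learned input generation and coverage feedback on top of this loop – but the core cycle of generate, run, observe is the same.

```python
import random
import subprocess

# Illustrative only: "./parser_under_test" is a hypothetical binary,
# and this brute-force loop is the simplest possible stand-in for the
# far more sophisticated, AI-assisted discovery described above.
TARGET = "./parser_under_test"
SEED = b"GIF89a\x01\x00\x01\x00"  # a small, well-formed seed input

def mutate(data: bytes) -> bytes:
    """Flip a handful of random bytes in the seed to make a new test case."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def crashed(payload: bytes) -> bool:
    """Run the target on the payload and report whether it crashed.
    On POSIX, a negative return code means the process died on a
    signal (e.g. -11 is SIGSEGV)."""
    try:
        proc = subprocess.run([TARGET], input=payload,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        return False  # hangs are interesting too, but ignored here
    return proc.returncode < 0

if __name__ == "__main__":
    for i in range(100_000):
        case = mutate(SEED)
        if crashed(case):
            print(f"Crash on iteration {i}; saving test case for triage.")
            with open("crash_case.bin", "wb") as f:
                f.write(case)
            break
```

Even a loop this crude can surface memory-safety bugs given enough iterations; the concern above is that AI dramatically shrinks how many iterations “enough” is – for defenders and attackers alike.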
Malware attacks are set to rise over the coming months and years. And with the advent of AI, fully automated malware could become the most critical security threat for most organisations and individuals globally.
Automation will enable threat groups to target a much higher volume of victims – cutting out time-consuming manual operations and vastly increasing both the number and the efficacy of attacks.
It’s already happening. But with increasingly accessible AI tools and hacking-as-a-service offerings, it’s going to become a bigger problem – giving attackers an edge over their targets.
Don’t get us wrong – we’re excited about the positive potential of AI. But as AI tech steps up a gear, it’s important that we plan for the risks it exposes us to, as well as the benefits it could bring.