We know – we’re stating the obvious by pointing out that AI is changing the way cybersecurity works. But with AI a major topic of conversation across all tech sectors, we wanted to look at some of the ways cybersecurity innovators are deploying machine learning – and how AI might generate new opportunities to strengthen security architecture as it becomes more advanced.
According to this 2022 analysis of cybersecurity trends by McKinsey, cyber criminals are increasingly integrating AI into their attack strategies. Machine learning and automation are making attacks faster and increasing their success rate.
And a key defence against AI-driven attacks is AI itself. One of the primary reasons is, quite simply, speed: research by IBM shows that extended detection and response times cause huge economic losses for victim companies, with the global average cost of a data breach reaching USD 4.35 million in 2022. Companies with AI and automation programs in place, however, saved an average of USD 3.05 million – because they were able to detect attacks in real time and respond rapidly.
Right now, generative AI (and particularly ChatGPT) is a focus for the cybersecurity industry, with both positive and malicious use cases sparking debate. You can read our blog post about that here. And in this article, we talk about four other key areas of deployment for AI.
Zero trust is an increasingly standard framework for managing an organisation’s entire connected network – and it requires constant visibility over that network.
AI-powered behavioural analytics make that visibility and monitoring workable, by establishing a baseline for ‘normal’ behaviour (based on analysis of past activity) and then picking out any points in the network that fall outside that ‘normal’ spectrum at any given moment.
Normal behaviour covers variables including (but not limited to) when and where users make login attempts and the type of device they’re using. By analysing a set of pre-established variables, AI can quickly detect anomalies and put the brakes on – so that if an anomaly does correspond to a real threat, it’s shut down instantly.
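To make the idea concrete, here’s a minimal sketch in Python of how a behavioural baseline might be built and checked. The data, field names, and threshold are all invented for illustration – production platforms use far richer models over many more variables – but the principle is the same: learn what ‘normal’ looks like from history, then flag deviations.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class LoginEvent:
    user: str
    hour: int     # hour of day (0-23) when the login was attempted
    device: str   # identifier of the device used

def build_baseline(history: list[LoginEvent]) -> dict:
    """Summarise one user's 'normal' behaviour from past logins."""
    hours = [e.hour for e in history]
    return {
        "mean_hour": mean(hours),
        "stdev_hour": stdev(hours),
        "known_devices": {e.device for e in history},
    }

def is_anomalous(event: LoginEvent, baseline: dict, threshold: float = 3.0) -> bool:
    """Flag logins from an unseen device, or far outside the usual hours."""
    if event.device not in baseline["known_devices"]:
        return True
    spread = baseline["stdev_hour"] or 1.0   # avoid dividing by zero
    z_score = abs(event.hour - baseline["mean_hour"]) / spread
    return z_score > threshold

# Illustrative history: a user who normally logs in around 9am from one laptop.
history = [LoginEvent("alice", h, "laptop-01") for h in (8, 9, 9, 10, 9, 8, 10)]
baseline = build_baseline(history)

print(is_anomalous(LoginEvent("alice", 9, "laptop-01"), baseline))       # normal login
print(is_anomalous(LoginEvent("alice", 3, "unknown-device"), baseline))  # 3am, new device
```

A real system would, of course, score many signals at once (location, network, access patterns) and feed the verdict into an automated response rather than a print statement.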
This approach is increasingly accessible to organisations of all sizes, with top providers including CrowdStrike, Ivanti, Microsoft, and VMware Carbon Black.
A crucial goal of the behavioural analytics described above is to manage network assets, and control endpoint security. According to an IBM report on AI and automation in cybersecurity, this is a core focus for organisations that are implementing AI into their cybersecurity strategy – they want to get a clearer view of their complete digital landscape and understand all their weak spots.
The report found that 35% of surveyed enterprises are using AI to discover endpoints and improve how they manage them.
That same IBM report noted that 34% of enterprises are using AI for vulnerability and patch management – and that’s predicted to rise to over 40% within the coming three years. This all ties into zero trust framework goals, within which an organisation has a holistic view of its network and doesn’t have to rely on human diligence to protect its IP and data.
A survey on patch management by Ivanti found that 71% of IT and security professionals think patching is too complex, and that it negatively impacts the time they’re able to spend on urgent projects. They’re not wrong: patching isn’t simple, and it does require an investment of time from skilled teams.
How, then, can an enterprise ensure that urgent patching happens when it needs to happen, without cutting too much into time needed for other urgent work?
Enter: AI.
Patch management solutions – like those provided by CrowdStrike Falcon, BlackBerry, and Ivanti Neurons for Patch Intelligence – can use machine learning to automate some of the time-consuming, repetitive tasks involved in patch management, and to prioritise the tasks that do need human attention – saving time and freeing teams to push other projects higher up their priority lists whenever possible.
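As a rough illustration of the prioritisation step – not any vendor’s actual algorithm; the fields, weights, and patch IDs below are invented for the example – a pending patch queue might be ranked by combining severity, active exploitation, and exposure:

```python
# Toy prioritisation pass: rank pending patches so the riskiest vulnerabilities
# surface first for human review, while routine low-risk updates can be queued
# for automated rollout. All field names and weights are illustrative.

def patch_priority(patch: dict) -> float:
    """Higher score = more urgent. The weights are arbitrary illustrations."""
    score = patch["cvss"]                              # base severity, 0-10
    if patch["exploited_in_wild"]:
        score += 5.0                                   # active exploitation trumps raw severity
    score += min(patch["exposed_hosts"], 100) / 20.0   # wider exposure, higher urgency
    return score

pending = [
    {"id": "KB-001", "cvss": 9.8, "exploited_in_wild": True,  "exposed_hosts": 40},
    {"id": "KB-002", "cvss": 5.4, "exploited_in_wild": False, "exposed_hosts": 300},
    {"id": "KB-003", "cvss": 7.1, "exploited_in_wild": False, "exposed_hosts": 12},
]

for p in sorted(pending, key=patch_priority, reverse=True):
    print(p["id"], round(patch_priority(p), 1))
```

In practice the scoring model would be learned from threat intelligence feeds rather than hand-written weights – that learning step is where the machine learning in these products earns its keep.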
As the cybersecurity landscape becomes more complex, our chances of keeping up are significantly reduced. But if we can understand what an attacker is aiming for, we’ll be better placed to stop them in their tracks – and divert a threat before it causes significant harm.
While AI-based cybersecurity currently has a focus on indicators of compromise (IOCs) to alert an organisation when a breach has occurred, AI can also be used to provide indicators of attack (IOAs). Essentially, IOAs detect the intent of an attacker – getting to grips with their goals.
CrowdStrike launched a pioneering IOA solution in 2022, combining human skill and cloud-based machine learning to generate IOAs – essentially, data that details behavioural events on the part of an attacker. These IOAs then build a picture of malicious behaviour and specific malicious intent – and during the testing phase alone (while CrowdStrike was working to implement IOAs on its platform), they were used to identify over 20 malicious patterns that had never been seen before.
It’s a race: cyber criminals are working to integrate AI into their attacks, just as cybersecurity professionals are working to deploy AI as a protective measure. But the use cases of AI to date can give us hope that security is possible – and we’re watching closely to see what happens next.