On stage at Black Hat MEA 2022, Alex Attumalil (Global CISO at Under Armour) outlined two broad categories of adversaries:
Strategic adversaries do their research before they even get close to your network. They know what they’re looking for – they’ve identified your most critical information. When it comes to getting into your system, Attumalil noted, “they’ll try opening the door before they kick it in” – they’ll leverage unpatched vulnerabilities or social engineering, hire a paid insider, or, if all that fails, deploy a zero-day.
Crucially, when they get in, “they will close the exit that they came in on. They’re going to patch your vulnerability, because they don’t want somebody else to go bang on that door – [because] then you realise there’s an open door.”
Their goal is to get to your intellectual property. And to get there, they need time – so they cover their tracks, stay quiet, and cause minimal disruption. They’ll use that time to work out how to access, gather, store and exfiltrate your data.
Companies are often heavily focused on signature-based detection – matching known ‘indicators of compromise’ and using those matches to activate alerts. For many organisations, those signature-based alerts are the primary source of breach information for security teams to assess and manage.
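To make that concrete, here’s a minimal sketch of what signature-based detection boils down to: matching an artefact’s hash against a feed of known-bad indicators. The hash set and helper functions here are illustrative, not any particular vendor’s API.

```python
import hashlib

# Illustrative known-bad hashes (indicators of compromise). In practice
# these come from threat-intelligence feeds, not a hard-coded set.
KNOWN_BAD_SHA256 = {
    # Placeholder value for illustration only:
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def signature_alert(path: str) -> bool:
    """True if the file matches a known-bad signature: the 'known-bad'
    detection that the rest of this article is about augmenting."""
    return sha256_of(path) in KNOWN_BAD_SHA256
```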
But signature-based detection means we’re only detecting the ‘known-bad’ – and missing everything that doesn’t match an existing signature.

Behaviour-based detection, Attumalil explained, can fill in some of those gaps. Consider everything that a signature can’t capture:
Threat actors have access to the same software and tools that defenders do – they can buy the same tooling that you buy, test against it, and make sure their malware evades your signature-based defences.
Most organisations operate change management tools so that, in theory, every change made in the environment is codified. But there’s always a risk that a member of the IT team makes a change without following protocol – and forgets to close it out afterwards.
Then there’s the problem of trusted accounts: how do you tell the difference between a trusted, verified account doing legitimate things and that same account doing bad things?
Threat actors can also ‘live off the land’, using the same tools within your environment that your admin teams use – so instead of introducing unknown signatures, they piggy-back on functionality that already exists in your network. And insider threats (like paid insiders) hold privileged access within your network – access they can put to nefarious purposes.
“None of this has a signature,” Attumalil said. None of this is going to trigger alerts.
“Signature detection is not enough. We have to figure out how to augment it – and this is where behaviour really comes into play.”
"Signature-based detection is looking for normal malicious traffic. Then you have a decoupled system, your behaviour system, that’s going to look for behaviour in the network, an anomaly with the network, or an anomaly with the user activity.”
There are two types of behaviour-based detection that can work side by side. One is a network-based anomaly detection system, which looks for traffic that’s malicious in nature: algorithms trained to flag patterns that could indicate malicious activity – like unexpected remote access, or command and control.
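As an illustration of that network side, here’s a toy heuristic for one such pattern: command-and-control beaconing, where a compromised host calls home at machine-regular intervals. The thresholds are invented for the example – a real system would learn them from baselined traffic.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float],
                         min_events: int = 10,
                         max_jitter: float = 0.1) -> bool:
    """Flag near-periodic outbound connections to a single destination,
    a classic command-and-control pattern that carries no file signature.

    timestamps: connection times (in seconds, sorted ascending) to one
                destination.
    max_jitter: allowed spread of the intervals relative to their mean;
                human-driven traffic is bursty, beacons are regular.
    """
    if len(timestamps) < min_events:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg <= 0:
        return False
    # Low relative spread means machine-like, periodic traffic.
    return pstdev(intervals) / avg < max_jitter
```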
The second is user behaviour.
“If you think about yourself, you have a normal routine – you come to the office, you clock in, you have a cup of coffee in the morning; those are normal routines, and every time you touch a system you’re logging into something. And that is going to be a normalised behaviour for you. And an adversary who gets in using your credential doesn’t know what your routine is. And when they try to do stuff outside of that is when that thing’s going to trigger.”
“And you bring it together: your bad activity on the asset itself, your network and your laptop or your server; and then you look at the user account, and the user account is doing stuff that they normally don’t do. And then you have detection.”
This detection is based on probability. Rather than delivering a clear verdict on whether a detected anomaly is nefarious, it surfaces those anomalies, alerts you to them, and enables you to rapidly assess whether a behaviour pattern really does suggest that a breach has occurred.
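A minimal sketch of that probabilistic idea, assuming a toy event stream of (hour, system) pairs per user – real user-behaviour analytics models far richer features, but the mechanics are the same: build a baseline of routine activity, then score how far a new event falls outside it.

```python
from collections import Counter

class UserBaseline:
    """Toy per-user baseline: how often each (hour, system) pair occurs.
    Scores new events by how rare they are relative to history."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, hour: int, system: str) -> None:
        self.counts[(hour, system)] += 1
        self.total += 1

    def anomaly_score(self, hour: int, system: str) -> float:
        """0.0 = routine, approaching 1.0 = never seen before.
        Laplace smoothing avoids zero probabilities."""
        p = (self.counts[(hour, system)] + 1) / (self.total + len(self.counts) + 1)
        return 1.0 - p

# Train on a user's normal routine, then score a 3 a.m. touch of a
# system they never use: that is what triggers the alert.
baseline = UserBaseline()
for _ in range(200):
    baseline.observe(hour=9, system="email")
print(baseline.anomaly_score(hour=3, system="hr-database"))  # near 1.0
print(baseline.anomaly_score(hour=9, system="email"))        # near 0.0
```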
As Attumalil put it, “the goal is to know when machines and users’ behaviour veers off the normal path. You are normalising traffic and then you are picking it up as soon as they veer off the normal behaviour.”
You can then use those alerts to look at which networks, or areas of a network, the user in question is accessing and what they’re doing there; which assets they’re using; at what time of day, and how often; and so on. Using that contextual information, security teams can evaluate the threat level and quickly shut down any concerning behaviour – and dig deeper for malware that may have been deployed, or data that may have been compromised.
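One way to picture that triage step – the fields, weights and thresholds here are hypothetical, purely to show how contextual signals might roll up into a response decision:

```python
from dataclasses import dataclass

@dataclass
class AnomalyAlert:
    user: str
    asset: str
    hour: int
    privileged_asset: bool = False   # does the asset hold critical data?
    off_hours: bool = False          # outside the user's normal routine?
    repeats_last_24h: int = 0        # how often has this fired recently?

def triage(alert: AnomalyAlert) -> str:
    """Roll contextual signals into a response decision.
    Weights and thresholds are illustrative, not tuned values."""
    score = (2 * alert.privileged_asset
             + 1 * alert.off_hours
             + min(alert.repeats_last_24h, 3))
    if score >= 4:
        return "contain"      # e.g. isolate the host, suspend the account
    if score >= 2:
        return "investigate"  # pull logs, look for deployed malware
    return "monitor"
```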
Firing at the same time, signature-based and behaviour-based detection can give you a much clearer picture – increasing the likelihood of detecting threats quickly, and reducing the time it takes to shut them down.
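Putting the two together can look as simple as this – a sketch, assuming a signature verdict and a behaviour anomaly score like the ones above:

```python
def combined_verdict(signature_hit: bool, behaviour_score: float,
                     threshold: float = 0.9) -> str:
    """Fuse the decoupled systems: a signature hit confirms the known-bad,
    while a high anomaly score surfaces the unknown-bad that signatures
    miss. The threshold is illustrative."""
    if signature_hit:
        return "block"   # known-bad: act immediately
    if behaviour_score >= threshold:
        return "alert"   # anomalous: escalate for human assessment
    return "allow"
```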