Why behaviour-based detection is crucial to secure systems

by Black Hat Middle East and Africa

On stage at Black Hat MEA 2022, Alex Attumalil (Global CISO at Under Armour) outlined two broad categories of adversaries: 

  1. Tactical adversaries. Their goal is economic gain, they’re opportunistic, and they want to get into your system and be disruptive. You won’t have to go searching for them – they’ll tell you they’re there. They’ll send a ransom note.
  2. Strategic adversaries. These, Attumalil said, are “the ones you’ve got to worry about.” They’re looking for your intellectual property; the crown jewels of your company. And they’re playing a long game, with the intention of gaining the highest possible return. Instead of being opportunists, they’re highly targeted – by the time they’re in your network, they already know exactly what they’re looking for.

What’s so dangerous about strategic adversaries? 

Strategic adversaries do their research before they even get close to your network. They know what they’re looking for – they’ve identified your most critical information. When it comes to getting into your system, Attumalil noted, “they’ll try opening the door before they kick it in” – they’ll leverage unpatched vulnerabilities or social engineering strategies, hire a paid insider, or – if all that fails – deploy a Zero Day.

Crucially, when they get in, “they will close the exit that they came in on. They’re going to patch your vulnerability, because they don’t want somebody else to go bang on that door – [because] then you realise there’s an open door.” 

Their goal is to get to your intellectual property. And to get there, they need time – so they cover their tracks, stay quiet, and cause minimal disruption. They’ll use that time to work out how to access, gather, store and exfiltrate your data. 

Why signature-based detection isn’t enough

Companies are often heavily focused on signature-based detection – matching known “indicators of compromise” – and using those matches to trigger alerts. For many organisations, those signature-based alerts are the primary source of breach information for security teams to assess and manage.

But signature-based detection means we’re only detecting the ‘known-bad’, and missing:

  • Cases of the ‘known-good’ being bad, or doing bad things – such as an account with approved credentials that has been compromised.
  • Unknowns. A Zero Day attack, for example, doesn’t have a signature within your system – so it isn’t picked up by signature-based detection.
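
To make that gap concrete, here’s a minimal sketch of what signature-based detection boils down to – comparing an artefact against a list of known-bad indicators. The hash values below are placeholders, not real threat intelligence.

```python
import hashlib

# Placeholder indicators of compromise - in practice these would come from
# threat-intelligence feeds of previously observed malware.
KNOWN_BAD_SHA256 = {
    "a" * 64,  # hypothetical hash of a known malware sample
    "b" * 64,
}

def is_known_bad(path: str) -> bool:
    """Alert only if the file's hash matches a signature we have already seen."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# A compromised-but-valid account, a living-off-the-land technique, or a Zero Day
# produces no match here: nothing fires, even though the activity is malicious.
```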

So, Attumalil explained, behaviour-based detection can fill in some of those gaps. 

How behaviour-based detection can enable a more comprehensive security system

Threat actors have access to the same software and tools that other organisations have – they can buy the same tooling that you buy, test their malware against it, and make sure it can’t be detected by your signature-based defences.

Most organisations operate change management tools, so that in theory, every change made in the environment is codified. But there’s always a risk that a member of the IT team might make a change without following protocol, and then forget to close it afterwards. 

Then there’s the problem of trusted accounts: how do you tell the difference between a trusted, verified account doing good things and that same account doing bad things?

Threat actors can also ‘live off the land’, utilising the same tools within your environment that your admin teams use – so instead of inserting unknown signatures, they can piggy-back on the existing functionality within your network. And insider threats (like paid insiders) have privileged access within your network – and they can use that access for nefarious purposes.

“None of this has a signature,” Attumalil said. None of this is going to trigger alerts. 

“Signature detection is not enough. We have to figure out how to augment it — and this is where behaviour really comes into play.”

"Signature-based detection is looking for normal malicious traffic. Then you have a decoupled system, your behaviour system, that’s going to look for behaviour in the network, an anomaly with the network, or an anomaly with the user activity.” 

There are two types of behaviour-based detection that can work side-by-side. One is a network-based anomaly detection system, which looks for traffic that’s malicious in nature – it uses algorithms trained to flag anything that could be malicious, like remote access or command-and-control traffic.
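
As a rough illustration of the network side (a simplification, not the specific system Attumalil described), one common approach is to baseline which destinations each host normally talks to, and flag connections that fall outside that baseline – the kind of first-time, unfamiliar traffic that remote access and command-and-control tools tend to generate. The names and data below are made up.

```python
from collections import defaultdict

class NetworkBaseline:
    """Tracks which external destinations each internal host normally contacts."""

    def __init__(self) -> None:
        self.seen: dict[str, set[str]] = defaultdict(set)

    def learn(self, host: str, destination: str) -> None:
        self.seen[host].add(destination)

    def is_anomalous(self, host: str, destination: str) -> bool:
        """Flag a connection to a destination this host has never contacted before."""
        return destination not in self.seen[host]

baseline = NetworkBaseline()
baseline.learn("laptop-42", "updates.vendor.example")
baseline.learn("laptop-42", "mail.corp.example")

# A first-time connection to an unfamiliar destination is worth a closer look -
# it may be command and control, or it may just be a new, legitimate service.
print(baseline.is_anomalous("laptop-42", "mail.corp.example"))  # False
print(baseline.is_anomalous("laptop-42", "203.0.113.9"))        # True
```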

And the second is user behaviour. 

“If you think about yourself, you have a normal routine — you come to the office, you clock in, you have a cup of coffee in the morning; those are normal routines, and every time you touch a system you’re logging into something. And that is going to be a normalised behaviour for you. And an adversary who gets in using your credential doesn’t know what your routine is. And when they try to do stuff outside of that, is when that thing’s going to trigger.”

“And you bring it together: your bad activity on the asset itself, your network and your laptop or your server; and then you look at the user account, and the user account is doing stuff that they normally don’t do. And then you have detection.” 

It’s based on probability. Rather than giving a clear conclusion about whether a detected anomaly is nefarious, it picks up those anomalies, alerts you to them, and enables you to rapidly assess whether a behaviour pattern does suggest that a breach has occurred. 
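
A minimal sketch of that idea, assuming a simple per-user baseline of normal login hours and systems accessed – the scoring weights, names and thresholds here are illustrative, not any particular vendor’s model:

```python
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    """A crude model of a user's normal routine, learned from past activity."""
    usual_hours: set[int] = field(default_factory=set)    # hours of day the user is normally active
    usual_systems: set[str] = field(default_factory=set)  # systems the user normally logs into

    def learn(self, hour: int, system: str) -> None:
        self.usual_hours.add(hour)
        self.usual_systems.add(system)

    def anomaly_score(self, hour: int, system: str) -> float:
        """Return a score between 0.0 (fits the routine) and 1.0 (very unusual)."""
        score = 0.0
        if hour not in self.usual_hours:
            score += 0.5   # active outside the user's normal hours
        if system not in self.usual_systems:
            score += 0.5   # touching an asset this user never touches
        return score

baseline = UserBaseline()
baseline.learn(hour=9, system="hr-portal")
baseline.learn(hour=10, system="email")

# The credential is valid either way; only the behaviour behind it differs.
print(baseline.anomaly_score(hour=9, system="email"))      # 0.0 - normal routine
print(baseline.anomaly_score(hour=3, system="code-repo"))  # 1.0 - alert for analyst review
```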

As Attumalil put it, “the goal is to know when machines and users’ behaviour veers off the normal path. You are normalising traffic and then you are picking it up as soon as they veer off the normal behaviour.” 

You can then use those alerts to look at what networks or areas of a network any user in question is accessing, and what they’re doing there; what assets they’re using; what time of day and how often they’re doing this; and so on. Using that contextual information, security teams can evaluate the threat level and quickly shut down any concerning behaviour – and use that behaviour to look deeper for malware that may have been deployed, or data that may have been compromised.

Working side by side, signature-based and behaviour-based detection give you a clearer picture of potential malware activity – increasing the likelihood of detecting threats quickly, and reducing the time it takes to shut them down.
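
Bringing the pieces together, a deliberately simplified (and hypothetical) alerting rule might combine the signals like this:

```python
def should_alert(signature_hit: bool, network_anomalous: bool, user_anomaly_score: float) -> bool:
    """Alert on any known-bad signature, or when both behaviour signals look abnormal."""
    behaviour_suspicious = network_anomalous and user_anomaly_score >= 0.5
    return signature_hit or behaviour_suspicious

# A Zero Day with no signature can still surface: unfamiliar network traffic plus
# out-of-routine account activity is enough to raise an alert for an analyst.
print(should_alert(signature_hit=False, network_anomalous=True, user_anomaly_score=1.0))  # True
```

Real deployments weight and correlate far more signals than this, but the principle is the one Attumalil described: the signature check confirms what you already know, and the behaviour signals surface what you don’t.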
