Human error: Only 10% of the problem

by Black Hat Middle East and Africa

When users make a mistake on your system, who do you blame?

Ira Winkler (CISO and VP at CYE) came to Black Hat MEA 2022 to talk about human error. We all know it’s a serious issue – as Winkler pointed out, “90% of all losses result from attacks targeting users.”

These losses come down to insider threats, whether or not the insider was malicious.

But while many of us jump straight to awareness training as the obvious solution to this, Winkler urged us to take a step back. Is awareness training alone enough to mitigate the risk of human error? Or should we look at human error in a different way – and create a better strategy to minimise it?

Don’t blame your users

“One day I was speaking at a hacker conference,” Winkler said. Someone was handing out stickers with the words: don’t click on stuff. “And I was in the buffet line, and somebody in front of me said, ‘Oh I need a whole bunch of these stickers, I’ve got a whole bunch of users and my users keep clicking on stuff, again and again.’”

“And I’m like, wow, you must give your users a lot of stuff to click on.”

In the story, the person complaining about his users’ over-enthusiastic clicking style was upset by this response – but the point, as Winkler put it, is simple:

“Why do you keep giving them stuff to click on if you don’t want them to click on it?”

It’s easy to blame the users. But that does a disservice to your entire cybersecurity infrastructure – because you don’t take responsibility for what you can change, and instead spend your time complaining about what you can’t change (the inevitability of people clicking on clickable things).

“If a user is doing something on your system, they’re only doing it on your system because you gave them that data, and then you gave them the ability to activate that data. You give them the ability to do things. And if they’re doing anything you don’t like, that’s on you.”

Cybersecurity isn’t unique

In 2020, a 17-year-old compromised some of the most-followed accounts on Twitter, including those of Joe Biden and Bill Gates. He did some basic research and “found out that he was able to phish people on Twitter,” Winkler said; “he needed access to multi-factor authentication, so he set up a man-in-the-middle attack.”

He created a webpage and sent it to some employees, then called them to ‘social-engineer’ them into entering their credentials on his page – and so he captured their account logins. “Then phishing around Twitter he found that there was a central storage system where all these utilities were kept, and that was how he changed passwords. He just got access to those tools.”
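To make the mechanics concrete, here is a minimal, hypothetical sketch of why a real-time man-in-the-middle proxy defeats one-time-code MFA: the attacker’s page simply forwards whatever the victim types – the password, then the one-time code – to the legitimate site before the code expires. The URLs, endpoints and field names below are illustrative assumptions, not Twitter’s actual systems.

```python
# Hypothetical sketch of a real-time credential relay (illustrative only).
# All URLs and field names are invented; they are not Twitter's endpoints.
import requests

REAL_LOGIN_URL = "https://target.example.com/login"     # legitimate login page
REAL_OTP_URL = "https://target.example.com/verify-otp"  # legitimate OTP check


def relay_credentials(username: str, password: str) -> requests.Session:
    """Forward the password the victim typed on the phishing page to the
    real site, opening an authenticated-but-OTP-pending session."""
    session = requests.Session()
    session.post(REAL_LOGIN_URL, data={"username": username, "password": password})
    return session


def relay_otp(session: requests.Session, otp: str) -> bool:
    """Forward the one-time code the victim just typed. Because the relay
    happens within the code's short validity window, the real site accepts
    it – and the attacker, not the victim, holds the logged-in session."""
    response = session.post(REAL_OTP_URL, data={"otp": otp})
    return response.ok
```

The lesson is systemic, not personal: an OTP prompt asks users to distinguish a pixel-perfect proxy from the real site, which is exactly the kind of error a system shouldn’t depend on users avoiding. Origin-bound methods such as WebAuthn close this gap because the browser, not the user, verifies the site’s identity.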

One major account wasn’t compromised: Donald Trump’s. And that was because two years earlier a disgruntled insider had deleted Trump’s account – so when it was reinstated, Twitter had placed protections on that account. But not on everyone else’s.

Twitter described the hack as a “coordinated social engineering attack” against its employees.

But the reality is much more straightforward (and perhaps more embarrassing) than that: it was a curious young person who leveraged human error to access high-profile accounts. And he succeeded not because employees made mistakes – people always will – but because Twitter’s security wasn’t good enough to contain those mistakes.

“In cybersecurity we believe, for some reason, we are the only profession in human history that ever had to deal with the problem of human error. There are other sciences that all proactively acknowledge humans will make mistakes and we need ways to deal with it.”

Safety science was the first discipline to do that. And initially, the focus was on why the user chose to do what they did, and why the user was wrong.

“Then they started to realise that sometimes it wasn’t the user who caused their own death. It was the system, the environment.”

“We had the same attitude in cybersecurity. If our users make a mistake we just need smarter users.”

But that’s never going to work

Instead, cybersecurity needs to follow the new school of thought in safety science:

  • A user is a part of the system, just as a computer is
  • Safety incidents are a result of a failure of the whole system
  • All enabling factors have to be reviewed
  • The user is in the proximity of the error – they’re not the reason for the error
  • Proximity and user error are both just symptoms of what’s wrong with the system

Actual human errors – the real ones – are due to carelessness, or lack of training, or ignorance, or occasionally malice. But they only account for 10% of the user errors that occur in a system. The other 90%? That’s the system.

And because of this, awareness training is inadequate to prevent at least 90% of the breaches that leverage human error. Awareness training is part of the bigger picture – but it has to be targeted, effective, and part of a broader strategy.

“What you need,” Winkler said, “is governance.”

That doesn’t mean a set of policies that no one ever looks at until they’re trying to cover their backs after a breach. “Governance should drive how you do things correctly. Governance should tell employees, not just to be aware of a phishing message, but how you detect a phishing message, step by step.”

Governance should be built out comprehensively – through endpoint technology, then user experience (“how do we provide an environment that stops users from making errors?”), then nudges (those daily visible reminders to maintain good security hygiene), then awareness; and so on.
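As one concrete illustration of that environment layer, here is a minimal sketch – assuming a Python mail-processing hook and an invented organisation domain – of a nudge that tags external mail before users ever see it, so spotting a spoofed sender doesn’t rest on awareness alone.

```python
# Minimal sketch of a system-level "nudge" (illustrative only).
# The domain and tag text are assumptions, not a specific product's config.
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"  # assumed organisation domain
TAG = "[EXTERNAL]"


def tag_external_mail(msg: EmailMessage) -> EmailMessage:
    """Prepend a visible warning tag to the subject line of any message
    whose sender address falls outside the organisation's domain."""
    _, sender = parseaddr(msg.get("From", ""))
    if not sender.lower().endswith("@" + INTERNAL_DOMAIN):
        subject = msg.get("Subject", "")
        if not subject.startswith(TAG):
            del msg["Subject"]  # EmailMessage requires delete-then-set to replace
            msg["Subject"] = f"{TAG} {subject}"
    return msg
```

A control like this makes Winkler’s point in miniature: it moves the detection step into the system itself, and leaves awareness training the narrower job of telling people what the tag means.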

Telling people to be aware isn’t enough. The system needs to be tight. And cybersecurity practices need to be taught through clear, proactive and accessible governance.
