When we interviewed Suresh Sankaran Srinivasan (Group Head of Cyber Defence at Axiata), we asked if there’s one thing he wishes everyone knew about cybersecurity.
He said:
“I wish everyone realised that cybersecurity is more of an attitude than a technical skill or control. While technical skills, measures and systems are crucial, the attitudes and behaviours of individuals within an organisation – both cyber professionals as well as general users – can have a substantial impact on the overall security posture.”
With that in mind, we wondered if there are any types of threats he’s seeing more of at the moment that most people simply aren’t aware of yet. And the answer was yes – there are loads of cyber risks that (most) people just don’t know about.
“Deepfakes refer to manipulated or synthetic media, such as videos or audio recordings, that are created using artificial intelligence (AI) techniques,” Srinivasan explained. “They can be used maliciously to deceive individuals or manipulate public perception.”
In 2023, we’ve already seen a number of high-profile deepfake scams hitting the media. Reuters reported that in May, a threat actor in China used deepfake tech to impersonate the friend of a victim during a video call. Using the ‘friend’s’ face and voice, the scammer persuaded the victim to transfer 4.3 million yuan (USD $622k) to the criminal’s bank account.
Srinivasan added, “Deepfake technology poses significant risks in various sectors, including politics, finance, and social engineering attacks, as it becomes increasingly sophisticated and difficult to detect.”
Srinivasan also pointed to the Internet of Things (IoT). “The growing proliferation of interconnected IoT devices presents new security challenges,” he said. “Cybercriminals can compromise vulnerable IoT devices and assemble them into botnets, which are then used to launch large-scale attacks, such as DDoS attacks or data breaches.”
Botnets can work in a number of ways. Internet Relay Chat (IRC) is a classic example: infected machines quietly join a chat channel and wait for the attacker’s instructions, which might direct them to flood a target machine or server until it fails, or to spam chats with infected links – hoping to get a hit when an unknowing user clicks on one. Threat actors can also use botnets to continuously recruit more devices into the infected network, creating more back doors through which they can then enter.
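To make that command-and-control (C2) pattern concrete, here’s a deliberately harmless, in-process Python sketch. Nothing in it touches the network – the ‘channel’ and ‘bots’ are plain objects invented for illustration – but the shape is the same: bots idle until the operator posts a command, then all of them act on it.

```python
# Toy, in-process simulation of botnet command-and-control (C2).
# Real IRC bots idle in a channel waiting for the operator's message;
# here the "channel" is just a shared list, and each "bot" only
# reports what it would have done.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CommandChannel:
    commands: List[str] = field(default_factory=list)

    def post(self, command: str) -> None:
        self.commands.append(command)

@dataclass
class Bot:
    bot_id: int

    def poll(self, channel: CommandChannel) -> Optional[str]:
        # A real bot would act on the command; this one just echoes it.
        if channel.commands:
            return f"bot {self.bot_id} saw: {channel.commands[-1]}"
        return None

channel = CommandChannel()
botnet = [Bot(i) for i in range(5)]
channel.post("report status")   # one message from the operator...
for bot in botnet:
    print(bot.poll(channel))    # ...fans out to every infected host
```

The point of the pattern is the fan-out: a single instruction reaches every compromised device at once, which is what makes even a crude botnet capable of large-scale disruption.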
“IoT botnets can cause disruptions on a massive scale, targeting critical infrastructure, businesses, or individuals,” said Srinivasan.
On quantum computing, Srinivasan warned: “While quantum computing holds immense promise for solving complex problems, it also poses risks to traditional encryption algorithms. Quantum computers could potentially break current encryption methods, compromising the confidentiality and integrity of sensitive data. As quantum computing advances, organisations will need to adopt quantum-resistant encryption algorithms and strengthen their cryptographic protocols.”
When it comes to breaking encryption keys, the simplest approach is brute force: try every possible key until you land on the right one. You can do that with current computer technology, but it isn’t easy. In July 2002, the distributed.net project cracked a 64-bit key – but it took more than 300,000 participants over four years of distributed work. To uncover a key twice that length (128 bits), even the fastest supercomputer in the world would need trillions of years to find the right code.
But a quantum computing method – Grover’s algorithm – could speed that process up.
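Here’s the back-of-the-envelope arithmetic in Python. The guess rate is a made-up assumption (a trillion keys per second), and in reality a quantum computer’s Grover iterations would be far slower than classical guesses – the point is simply that Grover’s square-root speedup effectively halves a key’s length.

```python
# Classical brute force needs ~2^(n-1) guesses on average for an n-bit key;
# Grover's algorithm needs roughly sqrt(2^n) = 2^(n/2) evaluations.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_crack(key_bits: int, guesses_per_second: float, grover: bool = False) -> float:
    tries = 2 ** (key_bits // 2) if grover else 2 ** (key_bits - 1)
    return tries / guesses_per_second / SECONDS_PER_YEAR

RATE = 1e12  # hypothetical: one trillion guesses per second
for bits in (64, 128):
    print(f"{bits}-bit key, classical: {years_to_crack(bits, RATE):.1e} years")
    print(f"{bits}-bit key, Grover:    {years_to_crack(bits, RATE, grover=True):.1e} years")
```

At that (generous) rate, a 128-bit key goes from astronomically out of reach classically to under a year of Grover iterations – which is why symmetric key lengths are expected to double in a post-quantum world.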
It’s not an imminent threat though. A 2018 study by the US National Academies of Sciences, Engineering, and Medicine noted that future quantum computers would need 100,000 times more processing power and an error rate 100 times better than today’s machines in order to do this – and that’s unlikely to happen very soon. That gives us time to develop more complex encryption schemes that can withstand quantum power.
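What might those quantum-resistant schemes look like in practice? Below is a minimal sketch of a post-quantum key exchange using the open-source liboqs Python bindings (liboqs-python). The algorithm name is an assumption that depends on the installed liboqs version – older builds call it “Kyber512”, newer ones “ML-KEM-512” – so treat this as illustrative rather than a deployment recipe.

```python
# Minimal post-quantum key encapsulation (KEM) sketch using liboqs-python.
# Assumption: "Kyber512" is available in the installed liboqs build.
import oqs

ALG = "Kyber512"

with oqs.KeyEncapsulation(ALG) as client:
    # Client generates a keypair and publishes the public key.
    public_key = client.generate_keypair()

    with oqs.KeyEncapsulation(ALG) as server:
        # Server encapsulates a fresh shared secret against that public key.
        ciphertext, server_secret = server.encap_secret(public_key)

    # Client decapsulates the ciphertext and recovers the same secret.
    client_secret = client.decap_secret(ciphertext)

assert client_secret == server_secret  # both sides now share a symmetric key
```

The shape is the same as any key encapsulation mechanism: one side publishes a public key, the other encapsulates a shared secret against it, and both end up with the same symmetric key – only here the underlying lattice problem is believed to resist known quantum attacks.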
He also cautioned that AI cuts both ways: “As artificial intelligence and machine learning technologies advance, there is a growing concern that cybercriminals may exploit these tools to launch more sophisticated and automated attacks. AI can be leveraged to automate phishing attacks, generate convincing social engineering messages, or evade traditional security defences, making it even more challenging to detect and prevent such attacks.”
There’s another note to add here: as well as using AI to increase the efficiency of their attacks, threat actors have already begun to exploit the surge of interest in AI products – targeting the AI platforms themselves in order to access user data.
In March 2023, OpenAI confirmed that ChatGPT had suffered a breach: a vulnerability in an open-source library the service relies on exposed the chat history of other active users, along with first and last names, contact details, and the last four digits (only) of credit card numbers.
Wherever lots of people go, lots of criminals will go too. So before anyone starts playing with the latest AI tools or experimenting with face-swapping apps, they should absolutely consider how safe their data is – and what might happen if it’s hacked.
Thanks to Suresh Sankaran Srinivasan at Axiata. Join us in Riyadh at Black Hat MEA 2023.