
Stress and resilience: A timeline of mental health in cybersecurity
Look at the last ten years of mental health in cybersecurity, and enter a new era of cyber resilience.
Shoutout to the 56 new cyber warriors who joined us last week. As a subscriber, you'll be among the first to receive our weekly newsletters every Wednesday, packed with all the latest news, updates, and insights.
Stay in the loop with our weekly LinkedIn newsletters. We can't wait to connect with you!
Deepfakes.
When we interviewed Suresh Sankaran Srinivasan (Group Head of Cyber Defence at Axiata) for the blog, he mentioned that deepfake tech is giving threat actors new scope to compromise victims.
Instead of writing a phishing email and hoping the victim falls for it, for example, a cyber criminal can use AI-powered face swapping technology to create a video that looks and sounds like a friend of the victim.
That's exactly how a perpetrator in China earlier this year convinced a man to make a bank transfer of 4.3 million yuan (USD $622,000).
The big problem here is that deepfakes can be really hard to spot, even for people who know about them. And for those who've never heard of deepfakes, it's unrealistic to expect them to notice if a fake video or audio recording was sent their way.
As Suresh said, "Deepfake technology poses significant risks in various sectors, including politics, finance, and social engineering attacks, as it becomes increasingly sophisticated and difficult to detect."
It goes without saying that the ability to make a video show someone somewhere they never were has huge potential for harm.
Deepfakes can:
- Enable fraud. As deepfakes become more sophisticated and difficult to spot, we're likely to see more and more cases of fraud, as cyber criminals use the technology to clone individuals and persuade victims to send money or data in the belief that they're sending it to someone they know and trust.
- Influence the stock market. In May, a fake image of an explosion near the Pentagon went viral on Twitter. The explosion never happened, but it sent brief shockwaves through the US stock market, and the S&P 500 dropped by 0.3%. This was just a momentary hint of what's possible: a deepfake of a CEO announcing major business restructuring, for example, could rapidly change that company's stock price.
- Artificially alter the reputations of individuals, brands, and entire organisations. Perpetrators could create a deepfake that makes a presidential candidate appear to be having a psychotic episode or confessing to a crime. Deepfakes could spread lies about how employees are treated by company bosses, or frame innocent citizens for complex crimes. And misinformation like this, even once it's been proven false, can hang around on the internet for years.
Would you be able to spot a deepfake video?
1. INDEED
2. I HAVE NO IDEA
With deepfakes increasingly widespread, and awareness still very low, it's very difficult to build a system for identifying and mitigating the risks that deepfakes pose.
But when a disruptive technology threatens security, it might just take another disruptive technology to solve the problem.
We're talking about blockchain.
Benjamin Gievis, Co-Founder of Parisian startup Block Expert, said to IBM: "What if we could create an ID and an ecosystem that could authenticate a news source and follow it wherever it's cited or shared?"
Blockchain technology can provide that level of transparency, and newsrooms, corporations and non-profits are already working with blockchain to develop those transparent networks. The Safe.press consortium, open to anyone who distributes news, adds a stamp every time a member publishes a press release or article.
That stamp acts as a digital seal of approval, and it's linked to a blockchain key which is instantly registered on a blockchain ledger. Then, whenever a news source with a stamp on it is appended to any other stories or references, that usage is tracked in the blockchain.
Everything is traceable. And when that traceability is visible and validated, users can see when a news item has been altered or faked.
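To make the idea concrete, here's a minimal Python sketch of that stamp-and-verify flow: hash the published content, register the hash on an append-only, hash-chained ledger, and later check any copy against it. This is a hypothetical toy, not Safe.press's actual implementation; all class and method names are invented, and a real consortium system would add digital signatures and a distributed ledger.

```python
import hashlib
import json
import time


class ProvenanceLedger:
    """Toy append-only ledger: each entry chains to the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def _hash(self, record: dict) -> str:
        # Deterministic hash of a record (sorted keys for stable JSON output).
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def stamp(self, publisher: str, content: bytes) -> str:
        """Register a content hash on the ledger and return the entry's ID."""
        record = {
            "publisher": publisher,
            "content_hash": hashlib.sha256(content).hexdigest(),
            "prev": self.entries[-1]["entry_hash"] if self.entries else None,
            "timestamp": time.time(),
        }
        record["entry_hash"] = self._hash(record)
        self.entries.append(record)
        return record["entry_hash"]

    def verify(self, content: bytes) -> bool:
        """Check whether this exact content was ever stamped."""
        digest = hashlib.sha256(content).hexdigest()
        return any(e["content_hash"] == digest for e in self.entries)


ledger = ProvenanceLedger()
ledger.stamp("newsroom-a", b"original article text")
print(ledger.verify(b"original article text"))  # True
print(ledger.verify(b"tampered article text"))  # False
```

The key property is the last two lines: any alteration to the article changes its SHA-256 hash, so a tampered copy no longer matches the ledger, which is what lets readers see when a stamped news item has been changed or faked.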
It would take widespread adoption of technology like this to counter the risks of deepfake hacks. But it does offer hope for the future.
Learn more from Suresh on the BHMEA podcast
Have an idea for a topic you'd like us to cover? We're eager to hear it! Drop us a message and share your thoughts. Our next newsletter is scheduled for 19 June 2023.
Catch you next week,
Steve Durning
Exhibition Director
P.S. - Mark your calendars for the return of Black Hat MEA from 14-16 November 2023. Want to be a part of the action?