Managing truth: Misinformation and disinformation in 2025

by Black Hat Middle East and Africa

Will the truth exist on the internet in 2025? 

Well, yes. But a lot of mistruths will exist at the same time.

Research suggests that 48% of people across 27 countries have believed fake news before later finding out that the story they’d trusted was fabricated. Misinformation is everywhere; it’s been identified in content published by trusted news outlets, and it’s rife on social media. The digital era has created a hurricane of possibility for anyone who wants to engineer public sentiment with false stories, and for anyone who wants to trick individuals into parting with their resources through targeted disinformation.

A peer-reviewed study in 2021 found that misinformation sources on Facebook received 6X more engagement than reputable news sites. Fake news is often provocative and specifically designed to drive comments and shares – so it spreads fast and has real potential to do damage. 

With generative AI, deception has become more sophisticated and widespread. The potential impact is so significant that the World Economic Forum has ranked misinformation and disinformation as the most severe short-term global risk.

Can we use GenAI to take back control over disinformation and truth? 

Generative AI is a driving force behind false information. But it’s this same technology that we need to embrace as part of the solution. We need to leverage GenAI not just in cybersecurity, but across all organisations that handle impactful information – everyone from journalists and policymakers to marketers and laypeople needs to know how to use GenAI tools to assess information and identify signs that it might not be genuine.

To make that happen, cybersecurity practitioners have a job to do: we need to develop tools and services that position GenAI for good, enabling everyone to use it against threat actors in the battle for truth. 

Already, marketers can use GenAI to craft personalised marketing messages that appeal to specific audiences or individuals. Researchers at MIT have found that AI has the potential to study our digital behaviours (the things we do when we think no one’s watching, in our emails and social media posts, for example) and use that understanding of what we do online to mimic our decision-making with 85% accuracy.

This could have useful applications, of course (AI could ease the pressure of decision-making by making decisions for us, far more quickly than we could ourselves). But it also opens a new playground of opportunity for malicious actors who can get ahead of us, predict our behaviour, and craft highly targeted and effective attacks.

It’s critical that 2025 is the year we build solutions that use GenAI capabilities to identify false information and malicious actors in real time. Media companies, social media platforms, and individuals need effective tools to recognise misinformation and stop it in its tracks – before it goes too far. Stopping false information early is the only way to minimise its negative impact.
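As an illustration only, here’s a minimal sketch of what one such tool could look like: a short script that asks a general-purpose LLM to flag common misinformation signals in a piece of text. It assumes the `openai` Python package and an OpenAI-compatible endpoint; the model name, prompt wording, and risk labels are illustrative assumptions, not anything prescribed in this article.

```python
# Minimal sketch: asking a general-purpose LLM to flag common misinformation
# signals in a piece of text. Assumes the `openai` package and an
# OpenAI-compatible API key in the environment; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SIGNALS_PROMPT = (
    "You are helping a reader evaluate a piece of online content. "
    "List any common misinformation signals you notice (unverifiable claims, "
    "missing sources, emotionally loaded framing, manipulated quotes or media), "
    "then finish with one line in the form 'RISK: low/medium/high'."
)

def flag_misinformation_signals(text: str) -> str:
    """Return the model's assessment of misinformation signals in `text`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable chat model would do
        messages=[
            {"role": "system", "content": SIGNALS_PROMPT},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep the assessment as consistent as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "Scientists confirm chocolate cures all known diseases, insiders say."
    print(flag_misinformation_signals(sample))
```

A production-grade detector would need far more than this – source reputation checks, claim verification against trusted databases, and human review – but even a simple assistant like this sketch shows how GenAI can be pointed at the problem rather than only being part of it.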

True and false: The philosophical debate 

The technological management of misinformation and disinformation is complicated by the philosophical uncertainty that lies just underneath it. German-American philosopher Hannah Arendt wrote extensively on the problem of truth: there can be many truths depending on the belief system you’ve been raised within, and there’s no objective ‘Truth’ that all of humanity can agree on.

So if we all live by different truths, how do we hand over authority to one organisation or official body to decide what is true and what is false? And at what point does the effort to curb misinformation encroach on the freedom of each person to think for themselves?

This means that even with the ideal tools to identify and remove false information, the challenge of managing it will always be complicated. We’ll never reach universal agreement on what constitutes harmful information or deception – but hopefully, through collaboration and exploration, we can agree on the characteristics that make misinformation harmful, and work to stop the spread of information that carries them.

We’ll look back on this period as the moment we established the framework for truth and trust in the digital era. It’s not as simple as making the right tools available, because truth itself is a concept, not a fact. We need to reach a set of agreements that enable truth and deception to be categorised effectively by advanced technologies – and that need is putting global perceptions of what’s real (and what isn’t) under the collective microscope.

Schools are teaching children how to identify misinformation 

Cybersecurity practitioners know how important the human element is when it comes to managing security. So one real positive that we’re seeing right now is this: 

Schools in some countries are starting to teach students about misinformation, and how to evaluate media and decide for themselves whether it’s likely to be true or not. 

Education systems are well-placed to teach the critical thinking skills that could protect individuals, communities, and societies from the negative impact of false information in the future. So as an industry and a field of knowledge, cybersecurity could support this learning with resources and tools schools can use to offer valuable training to their pupils. 

People are the most important factor in the impact of misinformation: if people can’t be easily manipulated, threat actors will have a much tougher field to play on.
