Bots, AI, and the new front line: Should blue teams start thinking like attackers?

by Black Hat Middle East and Africa

Today’s attackers automate wherever they can. The bot era has fundamentally shifted the front line of cybersecurity. So in 2025 more than ever, blue teams have to learn to see through the attacker’s eyes and anticipate moves before they land.

When automation itself becomes an adversary

Defenders once owned automation; now attackers are taking the lead. According to a new report on global bot security from DataDome, only 2.8% of websites are now fully protected (a steep drop from 8.4% just a year ago), while over 61% of domains failed every bot protection test, leaving them exposed to both simple and advanced automated attacks.

This is all the more alarming in the context of exploding AI-driven bot traffic. DataDome reported that LLM crawler traffic quadrupled in 2025, with AI agents now responsible for more than one in ten verified bot requests. These agents mimic human behaviour, adapt to defence logic, bypass CAPTCHAs, and evade static rules by constantly shifting tactics.

At the same time, PwC’s most recent (2025) survey on global digital trust (drawing from 4,042 business and tech executives across 77 countries) revealed that 96% of organisations say regulation has increased their cyber investment in the last year. But in spite of this spending rise, there are still major gaps in defence: only 2% of organisations have fully implemented cyber resilience actions across all areas surveyed. 

So budgets are rising but readiness is lagging. The tools many organisations still rely on were built for a world where bots were noisy and brittle. And that’s just not how it is anymore – in 2025, they’re quiet, agile, adaptive. The blue team’s old maps no longer align with the terrain.

We have to ask the right questions

In this new era, defending with rigid rules is futile. The question at the core now isn’t ‘is this traffic a bot?’ but ‘what is this traffic trying to do?’ 

Every unexpected login, every subtle API call, every shift in session behaviour requires interpretation. And that requires us to adopt attacker thinking.

Defenders must generate hypotheses: if a bot penetrates via path A, how would it escalate? What’s its lateral route or fallback plan? In practice, leading security operations centres (SOCs) are already deploying benign internal blue bots to probe systems – deliberately scanning and poking to see how the network reacts. They’re supplementing red team drills with AI-infused adversary simulations that mimic credential stuffing, adaptive reconnaissance or crawler behaviour. When your own systems struggle to contain your simulations, you know your real defences are fragile.
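One way to picture a benign "blue bot" drill is a credential-stuffing replay against your own controls. The sketch below is purely illustrative – the `RateLimiter` stands in for whatever defence your real login endpoint has, and all names (`stuffing_drill`, `blue-bot-1`) are hypothetical. The point is the measurement: how many automated attempts land before the defence engages?

```python
import time
from collections import defaultdict

class RateLimiter:
    """Stand-in for a login endpoint's defence: allows at most
    `limit` attempts per source within a sliding `window` (seconds)."""
    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(list)

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        # Keep only attempts still inside the window, then check the budget.
        recent = [t for t in self.attempts[source] if now - t < self.window]
        self.attempts[source] = recent
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        return True

def stuffing_drill(limiter, source, credential_pairs):
    """Replay leaked-style credential pairs from one source and report
    how many attempts got through before the defence cut the run off."""
    allowed = 0
    for username, password in credential_pairs:
        if not limiter.allow(source):
            break
        allowed += 1
    return allowed

# Drill: 20 credential pairs from a single source. A healthy defence
# should stop the run well before it completes.
pairs = [(f"user{i}", "Password123!") for i in range(20)]
got_through = stuffing_drill(RateLimiter(limit=5), "blue-bot-1", pairs)
print(f"{got_through} of {len(pairs)} attempts landed before blocking")
```

If a drill like this routinely exhausts its whole credential list before being blocked, that's the "fragile defences" signal the paragraph above describes.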

The mindset shift here is critical. Instead of waiting for alerts, defenders must think in terms of attack narratives. What’s the logic? What’s the sequence? What’s the turning point that reveals intent? This posture transforms defence from reactive to predictive.
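Thinking in attack narratives can be made concrete by modelling a narrative as an ordered sequence of stages and tracking how far a live event stream has progressed through it. The stage names below are hypothetical placeholders, not a real taxonomy; partial progress is the early warning, and a full match reveals intent.

```python
# Hypothetical attack narrative, expressed as an ordered sequence of stages.
NARRATIVE = ["recon_scan", "credential_success", "privilege_change", "bulk_export"]

def narrative_progress(events, narrative=NARRATIVE):
    """Return how many stages of the narrative appear, in order,
    in the event stream. Intervening unrelated events are ignored."""
    stage = 0
    for event in events:
        if stage < len(narrative) and event == narrative[stage]:
            stage += 1
    return stage

# A stream three stages deep into the narrative: worth an analyst's attention
# before the turning point (bulk_export) ever fires.
stream = ["recon_scan", "login_failure", "credential_success", "privilege_change"]
print(f"narrative {narrative_progress(stream)}/{len(NARRATIVE)} complete")
```

Scoring sequence rather than isolated alerts is what shifts the posture from reactive to predictive: the question becomes "what is the next stage?" instead of "what just happened?"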

Rethinking defence as intelligent response

As attackers continue to leverage automation, blue teams must lean into intent-based detection. Behavioural analytics, contextual scoring, real-time modelling – these become the backbone of defence. Rather than relying on static rule blocks, systems assess whether actions align with legitimate business logic.
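Contextual scoring can be sketched as a weighted sum of session signals measured against expected business logic. Everything here is an assumption for illustration – the signal names, the weights, and the threshold would all come from your own baselines, not from any vendor API.

```python
# Hypothetical signals and weights: each reflects how strongly a behaviour
# deviates from legitimate business logic. Values are illustrative only.
SIGNAL_WEIGHTS = {
    "headless_browser": 0.4,    # automation fingerprint in the client
    "impossible_travel": 0.3,   # geo shift faster than a human could move
    "burst_api_calls": 0.2,     # request rate far above the session baseline
    "no_mouse_events": 0.1,     # page interaction without human input
}

def intent_score(session_signals):
    """Sum the weights of the distinct signals observed in a session."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in set(session_signals))

def classify(session_signals, threshold=0.5):
    """Allow or block based on how far the session strays from expected logic."""
    score = intent_score(session_signals)
    return ("block" if score >= threshold else "allow", round(score, 2))

print(classify(["burst_api_calls", "no_mouse_events"]))     # weak signals alone
print(classify(["headless_browser", "impossible_travel"]))  # strong combination
```

The design point is that no single signal decides the outcome – it's the combination, scored in context, that approximates the question "what is this traffic trying to do?"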

And in a way, this evolution is as philosophical as it is technical. Defence teams have to shrink the cognitive distance to attacker thinking. They should ask what they’d do next if they were a bot; and treat every failed block and every delayed alert as a clue. Analysts should examine them not just as incidents, but as hypotheses to refine detection logic and threat playbooks.

Because real resilience lies in thinking more broadly and adapting faster – closing the loop between simulation and live countermeasures.

Defence by empathy 

In today’s cyber battleground, the most secure systems will be those with the sharpest insight. A blue team that can mirror attacker logic becomes an immune system: self-probing, self-healing, ever-learning.

And ultimately, this comes down to discipline. In the era of adaptive bots and AI agents, defence requires empathy – the willingness to see your network just as an adversary would.

