Unknown AI, unknown risk: How invisible AI use creates new attack paths

by Black Hat Middle East and Africa

The AI industry has already crossed the line between experimentation and everyday use. Cycode’s survey of the state of product security for the AI era reports that 100% of surveyed organisations now have AI-generated code in production, and 97% are using or piloting AI coding assistants such as GitHub Copilot, Cursor, or ChatGPT. Developers aren’t waiting for governance to catch up; they’re now shipping AI-influenced code as a matter of routine.

But security teams are working with far less certainty. Only 19% of security leaders say they have complete visibility into AI use across development, and 65% report that their overall security risk (including vulnerability count) has increased since adopting AI coding assistants. 

Many describe the same issue: AI is accelerating delivery, but it’s simultaneously eroding the visibility needed to secure it.

Then there’s Akamai’s 2025 report on the role of AI in web application and API security. It highlights a 33% rise in global web attacks, alongside the emergence of new AI threats and vulnerabilities across modern applications. AI is boosting both innovation and exploitation – often faster than organisations can instrument oversight.

The result is a growing layer of AI unknowns embedded across codebases, dependency graphs, CI/CD pipelines and internal tools. Invisible, untracked, and increasingly risky.

Shadow AI and the rise of invisible dependencies

Cycode’s data makes it clear that AI adoption is outpacing governance. More than half of organisations (52%) lack formal, centralised AI governance, leaving developers and business units to adopt AI tools independently. AI isn’t creeping slowly into enterprises – it’s arriving openly, driven by convenience and productivity gains rather than policy.

The technical consequences are sharper still. Endor Labs’ recent dependency management research shows how AI coding agents behave when selecting open-source packages: not cautiously, but confidently. Their analysis found that 34% of dependency versions suggested by AI agents didn’t exist at all (hallucinated version numbers), and 49% of the versions that did exist carried known vulnerabilities. In the default configuration, only one in five dependency versions recommended by AI agents were safe.

When organisations don’t know where AI is being used, they adopt these weaknesses blindly.
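None of this means teams have to take an assistant’s suggestions on faith. As a rough illustration (not the methodology behind the research above), the short Python sketch below checks each AI-suggested package pin against the public PyPI index, where hallucinated versions simply fail to resolve, and queries the OSV.dev advisory database for known vulnerabilities before the pin is accepted; the package names and versions shown are placeholder examples, not findings from the reports.

```python
import requests

# Placeholder pins, e.g. versions proposed by a coding assistant.
SUGGESTED = {"requests": "2.19.0", "flask": "0.12.2"}

def version_exists(name: str, version: str) -> bool:
    # PyPI returns 404 for versions that were never published (a hallucinated pin).
    resp = requests.get(f"https://pypi.org/pypi/{name}/{version}/json", timeout=10)
    return resp.status_code == 200

def known_vulnerabilities(name: str, version: str) -> list[str]:
    # OSV.dev aggregates public advisories across ecosystems, including PyPI.
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

for name, version in SUGGESTED.items():
    if not version_exists(name, version):
        print(f"{name}=={version}: version does not exist (possible hallucination)")
        continue
    vulns = known_vulnerabilities(name, version)
    print(f"{name}=={version}: " + (", ".join(vulns) if vulns else "no known advisories"))
```

A check along these lines can sit in a pre-commit hook or CI job, so AI-suggested dependencies get the same scrutiny as any other change.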

Then comes the integration layer: the rapidly expanding ecosystem of Model Context Protocol (MCP) servers – connectors that allow AI agents to access files, run commands, query databases, and interact with internal systems. Endor Labs examined more than 10,000 MCP servers, uncovering an ecosystem with significant structural risk. 75% of MCP implementations are built by individuals rather than organisations; 41% have no licence information; and 82% rely on sensitive APIs such as file system access, code execution, or SQL operations. 

Each MCP server introduces around three known vulnerable dependencies on average.

If security teams don’t know which AI assistants, plugins or MCP servers their developers are using (or how they are wired into internal systems), the problem is far more serious than a visibility gap. They’re inheriting an unbounded trust model in which AI agents can quietly mediate access to critical systems through connectors no one has reviewed.

At that point, ‘shadow IT’ becomes ‘shadow AI’ – except now, the shadow has its own execution layer. 
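Seeing that execution layer starts with a simple inventory. The sketch below is an illustration rather than a reference implementation: it walks a repository for the JSON config files several MCP clients use and lists every server a local AI agent is permitted to launch. The file locations and the ‘mcpServers’ layout are assumptions to adapt to the clients actually in use.

```python
import json
from pathlib import Path

# Config file names vary by MCP client; these locations are assumptions to
# adjust for the clients actually used on your machines and in your repos.
CANDIDATE_CONFIGS = [".mcp.json", ".cursor/mcp.json", ".vscode/mcp.json"]

def inventory(repo_root: str) -> list[dict]:
    """List every MCP server a local AI agent is configured to launch."""
    found = []
    for rel in CANDIDATE_CONFIGS:
        path = Path(repo_root) / rel
        if not path.is_file():
            continue
        config = json.loads(path.read_text())
        # Several clients use a top-level "mcpServers" mapping of name -> launch spec.
        for name, spec in config.get("mcpServers", {}).items():
            found.append({
                "config": str(path),
                "server": name,
                # The local command the agent is allowed to run: the execution layer.
                "command": " ".join([spec.get("command", "")] + spec.get("args", [])),
            })
    return found

if __name__ == "__main__":
    for entry in inventory("."):
        print(entry)
```

Even a crude list like this gives security teams something most of them currently lack: a record of which connectors exist at all, and what they’re allowed to run.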

The hidden supply chain behind your supply chain

In turn, this creates another problem: AI is expanding the software supply chain in unpredictable ways. Cycode reports that AI-generated code is now the number one blind spot for security leaders, followed closely by AI tool usage and supply chain exposure. 

Developers might, for example, unknowingly import deprecated libraries, dormant packages, or MCP connectors maintained by a single developer on GitHub. These components weren’t designed with enterprise security in mind – but they’re increasingly part of production systems. 

When AI agents generate configuration files, scripts or infrastructure-as-code, or when staff use unvetted AI assistants to create automation, organisations risk embedding opaque, unauthorised components into critical applications.

It’s not intentional – there’s no nefarious mastermind making this happen. It’s simply the predictable result of rapid adoption without a unified map.

We have to see before we can secure 

Our takeaway from this research isn’t that AI adoption across industries should slow down. But we do think that developers, security professionals, and AI service providers need to work together to shed light on where and how adoption is happening.

In positive news, organisations are already moving in this direction. Cycode notes that 97% plan to consolidate their AppSec tooling in the next year, seeking a unified view of vulnerabilities, dependencies and code changes across teams. That instinct needs to extend to AI itself: which tools are approved, where models are integrated, how agents behave, and what guardrails they operate within.

Visibility must become a first-class security objective. That begins with mapping where AI tools have taken root – across IDEs, internal applications, pipelines and MCP servers – and establishing governance that developers can actually follow. Endor Labs’ MCP analysis shows that this layer is now part of the software supply chain, and it requires the same discipline: dependency inspection, licence scrutiny, runtime monitoring, and standardised controls.
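Licence scrutiny, at least, is straightforward to automate. As a minimal sketch (assuming the components in question are published on PyPI; many MCP servers live on npm, so the registry would need to change), the snippet below flags packages whose metadata declares no licence at all, one of the structural gaps the MCP research highlights. The package names are hypothetical placeholders.

```python
import requests

COMPONENTS = ["example-mcp-server", "example-ai-plugin"]  # hypothetical package names

for name in COMPONENTS:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"{name}: not found on PyPI")
        continue
    info = resp.json()["info"]
    # A licence may be declared in the free-text field or via trove classifiers.
    declared = info.get("license") or ""
    classifiers = [c for c in info.get("classifiers", []) if c.startswith("License ::")]
    if not declared and not classifiers:
        print(f"{name}: no licence information declared")
    else:
        print(f"{name}: {declared or '; '.join(classifiers)}")
```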

The issue emerging through this research isn’t that AI is inherently insecure, but that it remains largely unseen. And in cybersecurity, unseen systems become ungoverned systems – and ungoverned systems are where attackers thrive.
