GenAI and the emergence of persistent memory threats

by Black Hat Middle East and Africa
on
GenAI and the emergence of persistent memory threats

ChatGPT Atlas is described as an AI-powered browser. The ChatGPT assistant is embedded into web browsing; enabling it to summarise pages, assist tasks, and retain ‘memories’ of browsing context for future use. It moves with you as you journey through the internet; and while that can be incredibly useful, it also creates new vulnerabilities. 

Now, researchers at security platform LayerX have discovered a flaw in Atlas that allows a logged-in user to be tricked into executing a cross-site request forgery (CSRF) action, which injects hidden instructions into ChatGPT’s memory. 

And because the assistant’s memory persists across sessions (and in Atlas’s case, across devices too), the injected instructions could remain active until the user deliberately clears them. In effect, a tool designed to assist users becomes a new persistence vector for attackers, as reported by The Hacker News

Why is this exploit dangerous? 

The major difference between this and other browser tab exploits is that it targets the persistent memory feature of the assistant. 

According to LayerX: 

“The tainted memories will be invoked, and can execute remote code that will allow the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.”

While the vulnerability technically affects ChatGPT users in any browser, the researchers emphasise that Atlas is especially exposed. Their testing finds the Atlas browser lacks meaningful anti-phishing protections, and estimates that users of AI browsers (specifically Comet and Genspark) are up to 85% more vulnerable to phishing-style attacks than users of traditional browsers such as Google Chrome or Microsoft Edge.

The sequence of the exploit is described as:

  1. User is already logged-in to ChatGPT in Atlas;
  2. User clicks or is redirected to a malicious page;
  3. The page issues a CSRF request leveraging the existing auth token;
  4. The CSRF writes hidden instructions into the assistant’s memory;
  5. Later, those instructions activate when the user requests a legitimate task, enabling malicious actions.

What should organisations do today?

Given the strong potential for persistence and lateral movement via this feature, organisations should treat AI-powered browsers with memory features as a distinct risk. 

Practical steps might include:

  • Inventory and restrict use of agentic browsers (such as ChatGPT Atlas, Comet, and Genspark) on high-privilege machines.
  • Disable or limit the assistant’s memory feature where possible, especially in sensitive environments.
  • Segregate use of agentic browsers from critical systems (source repositories, admin consoles, internal SaaS).
  • Enhance monitoring for unusual browser behaviour –  for example, the assistant initiating background fetches or external code execution.
  • Provide training across the organisation that emphasises the risk of hidden instructions being injected via seemingly benign web links.

A sign that the attack surface continues to grow 

This discovery by LayerX shows that embedding an AI assistant inside a browser with persistent memory creates a novel persistence and attack surface – one that traditional browser security controls can’t adequately defend against. With this in mind, organisations need to treat AI assistant browsers with memory as part of their enterprise attack surface. 

And beyond that, this serves as a reminder that the attack surface continues to grow; and that as we integrate and embed AI tools across organisations, they bring new vulnerabilities. Those vulnerabilities could live in the very logic and memory of the systems we now rely on to think with us; so cybersecurity has a new job to do. We have to make users aware that the systems they’re co-working with are to be questioned, not trusted; and that an AI tool could become a new insider threat.

Share on

Join newsletter

Join the newsletter to receive the latest updates in your inbox.


Follow us


Topics

Sign up for more like this.

Join the newsletter to receive the latest updates in your inbox.

Related articles