The philosophy of purple: Behind the rise of red + blue
We echo an Irish playwright’s warning against false knowledge in this philosophical dive into the power of purple teaming, as a bridge between red and blue.
Read More
ChatGPT Atlas is described as an AI-powered browser. The ChatGPT assistant is embedded into web browsing; enabling it to summarise pages, assist tasks, and retain ‘memories’ of browsing context for future use. It moves with you as you journey through the internet; and while that can be incredibly useful, it also creates new vulnerabilities.
Now, researchers at security platform LayerX have discovered a flaw in Atlas that allows a logged-in user to be tricked into executing a cross-site request forgery (CSRF) action, which injects hidden instructions into ChatGPT’s memory.
And because the assistant’s memory persists across sessions (and in Atlas’s case, across devices too), the injected instructions could remain active until the user deliberately clears them. In effect, a tool designed to assist users becomes a new persistence vector for attackers, as reported by The Hacker News.
The major difference between this and other browser tab exploits is that it targets the persistent memory feature of the assistant.
According to LayerX:
“The tainted memories will be invoked, and can execute remote code that will allow the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.”
While the vulnerability technically affects ChatGPT users in any browser, the researchers emphasise that Atlas is especially exposed. Their testing finds the Atlas browser lacks meaningful anti-phishing protections, and estimates that users of AI browsers (specifically Comet and Genspark) are up to 85% more vulnerable to phishing-style attacks than users of traditional browsers such as Google Chrome or Microsoft Edge.
The sequence of the exploit is described as:
Given the strong potential for persistence and lateral movement via this feature, organisations should treat AI-powered browsers with memory features as a distinct risk.
Practical steps might include:
This discovery by LayerX shows that embedding an AI assistant inside a browser with persistent memory creates a novel persistence and attack surface – one that traditional browser security controls can’t adequately defend against. With this in mind, organisations need to treat AI assistant browsers with memory as part of their enterprise attack surface.
And beyond that, this serves as a reminder that the attack surface continues to grow; and that as we integrate and embed AI tools across organisations, they bring new vulnerabilities. Those vulnerabilities could live in the very logic and memory of the systems we now rely on to think with us; so cybersecurity has a new job to do. We have to make users aware that the systems they’re co-working with are to be questioned, not trusted; and that an AI tool could become a new insider threat.
Join the newsletter to receive the latest updates in your inbox.
We echo an Irish playwright’s warning against false knowledge in this philosophical dive into the power of purple teaming, as a bridge between red and blue.
Read More
How do red and blue teams stay sharp? From frameworks to CTFs, discover the real skills you need to develop as you build your cybersecurity career.
Read More
Breach simulations expose how teams communicate, learn, and adapt. Discover what red-blue exercises reveal about real resilience in cybersecurity.
Read More