Cybersecurity firm LayerX Security has discovered a serious vulnerability in OpenAI’s new ChatGPT Atlas browser that could allow attackers to inject malicious instructions directly into a user’s ChatGPT memory. Dubbed “ChatGPT Tainted Memories,” the flaw enables remote code execution and account compromise without user awareness.
Researchers warn that Atlas users may be up to 90% more vulnerable to phishing attacks than those using traditional browsers like Chrome or Edge, due to its limited built-in protections.
The vulnerability exploits Atlas’s agentic browsing capabilities, which let ChatGPT interpret and act on web content. Attackers can embed hidden instructions into websites that the AI then processes as legitimate tasks—potentially opening accounts, running commands, or accessing files on the user’s device. Because Atlas stores contextual “memory,” malicious instructions can persist across sessions, giving attackers a lasting foothold even after the browser is closed.
LayerX has reported the issue to OpenAI under responsible disclosure. While Atlas is currently macOS-only, Windows and Android versions are expected soon, heightening concern about broader exposure. Until a fix is issued, experts advise limiting use of ChatGPT Atlas for sensitive accounts, avoiding unfamiliar links, and reviewing browser activity regularly. Organisations, meanwhile, should treat AI browsers as higher-risk endpoints requiring stricter monitoring and controls.



