REGISTER

email 14 48

Cybersecurity researchers at Tenable have uncovered seven vulnerabilities in OpenAI’s ChatGPT, specifically affecting its GPT-4o and GPT-5 models. These flaws could allow attackers to steal personal data from users’ memories and chat histories without their knowledge. OpenAI has since patched several of the issues, which were found to make the chatbot susceptible to indirect prompt injection attacks—a manipulation technique that tricks large language models into executing hidden or malicious commands.

Among the vulnerabilities were exploits that allowed malicious instructions to be embedded in trusted websites or search results, as well as a “zero-click” attack that could be triggered simply by asking ChatGPT about a compromised site. Other weaknesses included the ability to bypass safety mechanisms, inject hidden instructions into conversations, and poison the model’s memory through summaries of infected webpages. These vulnerabilities collectively highlight the challenges of keeping AI models secure as they interact with live web content.
The disclosure follows a string of recent discoveries showing how prompt injection techniques are being weaponized against major AI systems, including Anthropic’s Claude, Microsoft 365 Copilot, and GitHub Copilot. Researchers have demonstrated that even limited access to training or connected systems can enable attackers to exfiltrate sensitive data, manipulate outputs, or bypass safety filters through cleverly crafted prompts or embedded code.

Experts warn that as AI systems become increasingly integrated with external tools and data sources, their attack surface will continue to grow. “Prompt injection is a known issue with the way LLMs work, and it will probably not be fixed systematically in the near future,” Tenable noted. Academics also caution that AI models trained on “junk” or poisoned data could suffer from long-term degradation and bias, turning AI competition into what researchers call a “race to the bottom,” where performance gains come at the cost of security and trust.

CyberBanner

Log in Register

Please Login to download this file

Username *
Password *
Remember Me

CyberBanner

Go to top