IBM’s GenAI tool “Bob” is vulnerable to indirect prompt injection attacks in beta testing
... CLI faces prompt injection risks; ...
There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do ...
In December 2025, a feature called Connectors finally moved out of beta and into general availability. This feature allows ...
Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues raised by a security ...
Security researchers from Radware have demonstrated techniques to exploit ChatGPT connections to third-party apps to turn ...
That's according to researchers from Radware, who have created a new exploit chain they call "ZombieAgent," which demonstrates ...
ChatGPT vulnerabilities allowed Radware to bypass the agent’s protections, implant persistent logic into memory, and ...
CrowdStrike's 2025 data shows attackers breach AI systems in 51 seconds. Field CISOs reveal how inference security platforms ...
While the shortest distance between two points is a straight line, a straight-line attack on a large language model isn't always the most efficient, or the quietest, way to get the LLM to do bad ...
Researchers studying AI chatbots and ChatGPT behaviour have found that the popular AI model can display anxiety-like patterns ...