News
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking ...
In a move that sets it apart from every other major AI assistant, Anthropic has given its most advanced Claude models the ...
Anthropic empowers Claude AI to end conversations in cases of repeated abuse, prioritizing model welfare and responsible AI ...
In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The ...
A research paper describes multistep reasoning and outlines how the model thinks before providing a response. In one instance ...
Anthropic's Claude AI now prohibits chats about nuclear and chemical weapons, reflecting the company's commitment to safety ...
Anthropic's Claude Sonnet 4 supports a 1-million-token context window, enabling the AI to process entire codebases and documents in ...
Anthropic says new capabilities allow its latest AI models to protect themselves by ending abusive conversations.
A recent study found that AI chatbots showed signs of stress and anxiety when users shared “traumatic narratives” about crime ...
Anthropic's popular coding model just became a little more enticing for developers with a million-token context window.
Anthropic’s Claude Code now features continuous AI security reviews, spotting vulnerabilities in real time to keep unsafe ...