News

While Meta's recently exposed AI policy explicitly permitted troubling sexual, violent, and racist content, Anthropic adopted ...
Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking ...
Anthropic says new capabilities allow its latest AI models to protect themselves by ending abusive conversations.
Claude AI adds privacy-first memory, extended reasoning, and education tools, challenging ChatGPT in enterprise and developer ...
Though we fortunately haven't seen any examples in the wild yet, many academic studies have demonstrated that it may be possible ...
In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The ...
Google and Anthropic are racing to add memory and massive context windows to their AIs right as new research shows that ...
While an Anthropic spokesperson confirmed that the AI firm did not acquire Humanloop or its IP, that’s a moot point in an ...
The new context window is available today within the Anthropic API for certain customers — like those with Tier 4 and custom ... (a usage sketch follows this list).
A federal judge in California has denied a request from Anthropic to immediately appeal a ruling that could place the ...
Notably, Anthropic is also offering two different takes on the feature through Claude Code. First, there's an "Explanatory" ...
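As a rough illustration of the long-context item above: here is a minimal sketch of a request against the Anthropic API that packs a large document into a single prompt. It assumes the official Anthropic Python SDK is installed and an API key is configured; the model identifier, the file name, and the long-context beta flag are illustrative assumptions, not details confirmed by these reports.

```python
# Minimal sketch, assuming the anthropic Python SDK and ANTHROPIC_API_KEY in the
# environment. The model name and beta flag are illustrative guesses.
import anthropic

client = anthropic.Anthropic()

# Hypothetical long input that benefits from a larger context window.
with open("repo_dump.txt", encoding="utf-8") as f:
    document = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",     # assumed model identifier
    max_tokens=1024,
    betas=["context-1m-2025-08-07"],      # assumed long-context beta flag
    messages=[
        {
            "role": "user",
            "content": f"Summarize the key changes in this document:\n\n{document}",
        }
    ],
)
print(response.content[0].text)
```

Since the rollout is tiered, the same request may be rejected for accounts outside the eligible customer groups mentioned above.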