AI startup Mistral has launched an API to moderate potentially toxic or otherwise problematic text in a range of languages.
The API powers the moderation service in Mistral's Le Chat. Built on a fine-tuned model (Ministral 8B), it can be tailored to specific applications and safety standards.
Mistral AI is positioning itself as the security-conscious alternative to OpenAI and other AI providers.
The API classifies text into nine categories of potentially harmful content, including sexual content and hate and discrimination.
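As a rough sketch, a call to the moderation service might look like the following. The endpoint path, payload shape, and `mistral-moderation-latest` model name reflect Mistral's announced API conventions but are assumptions here and should be checked against the current documentation before use.

```python
import json
import urllib.request

# Assumed endpoint path for Mistral's moderation API.
API_URL = "https://api.mistral.ai/v1/moderations"

def build_moderation_request(texts, model="mistral-moderation-latest"):
    """Build the JSON payload: a model name plus the input strings to classify.

    The field names here are assumptions based on Mistral's public API style.
    """
    return {"model": model, "input": texts}

def moderate(texts, api_key):
    """Send the moderation request; the response is expected to map each
    input to per-category classification results."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_moderation_request(texts)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Building the payload requires no network access or API key:
payload = build_moderation_request(["You are a terrible person."])
print(payload["model"])  # mistral-moderation-latest
```

In practice, developers would more likely use Mistral's official client library, which wraps this request/response cycle; the raw-HTTP form above just makes the assumed request shape explicit.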
Mistral claims that its moderation model is highly accurate, but it also acknowledges the model is a work in progress. Notably, the company did not benchmark the API's performance against other popular moderation APIs.