Mistral’s new moderation tool automatically removes inappropriate content
The French technology firm Mistral AI has introduced a new moderation tool powered by its Ministral 8B model, enabling the automatic detection and removal of offensive or unlawful posts. (There is, however, a small chance of errors.)
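The announcement does not show how the tool is invoked. Purely for illustration, the sketch below assumes Mistral's Python SDK (mistralai) and its moderation classifier endpoint; the model alias, response fields, and the "remove if flagged" logic are assumptions for this example, not details from the announcement.

```python
import os
from mistralai import Mistral

# Minimal sketch (not an official example): screen posts with Mistral's
# moderation classifier and drop any post that trips a policy category.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

posts = ["Example user post to screen before publishing."]

response = client.classifiers.moderate(
    model="mistral-moderation-latest",  # assumed alias for the Ministral 8B-based classifier
    inputs=posts,
)

for post, result in zip(posts, response.results):
    # `categories` is assumed to map category names to boolean flags.
    flagged = [name for name, hit in result.categories.items() if hit]
    if flagged:
        print(f"Removed ({', '.join(flagged)}): {post!r}")
    else:
        print(f"Published: {post!r}")
```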
As reported by TechCrunch, some studies have shown that content about people with disabilities can be flagged as “negative” or “toxic” even when it is nothing of the sort.
At launch, Mistral’s new moderation tool supports Arabic, English, French, Italian, Japanese, Chinese, Korean, Portuguese, Russian, Spanish, and German, with more languages planned for the near future. In July, Mistral released a large language model capable of generating long code segments faster than other available open-source models.
