How can I activate stricter guardrails on Mistral AI models?
Updated over a week ago

Mistral AI offers a "safe mode" in the API, activated by setting the safe_mode parameter to true. When safe mode is enabled, a guardrail system prompt is prepended to the original prompt, increasing control over outputs and helping prevent potentially risky or inappropriate content.
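As a rough sketch, a chat completion request with safe mode enabled might look like the following. The endpoint path, the example model name, and the exact boolean field (this article calls it safe_mode; some API/client versions may name it differently) are assumptions, so check the current API reference before relying on them.

```python
import json

# Hypothetical endpoint path; verify against the current Mistral API docs.
ENDPOINT = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, safe_mode: bool = True) -> dict:
    """Build a chat completion payload with safe mode toggled on.

    The safe_mode field name follows this article; the model name is
    only an illustrative placeholder.
    """
    return {
        "model": "mistral-small-latest",  # example model name (assumption)
        "messages": [{"role": "user", "content": prompt}],
        "safe_mode": safe_mode,  # adds the guardrail system prompt
    }

payload = build_request("Summarize your content policies.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the endpoint with an Authorization bearer header carrying your API key; keeping safe_mode as an explicit parameter makes it easy to toggle guardrails per request.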
