How can I activate stricter guardrails on Mistral AI models?

Mistral AI offers a "safe mode" in its API, which you can activate by setting the safe_mode parameter to true. When safe mode is active, a guardrail system prompt is prepended to your original prompt, giving you more control over outputs and helping prevent potentially risky or inappropriate content.