Mistral Ships Magistral Reasoning Models — 10x Faster Than Competitors
Magistral Small (24B, Apache 2.0) and Magistral Medium bring step-by-step reasoning to Mistral's lineup. Le Chat delivers Magistral responses at 10x the speed of competing reasoning models.
Sarah Mueller
Mistral released its Magistral reasoning models on June 10, 2025 — step-by-step reasoning models comparable to OpenAI's o3 and Gemini's thinking mode. Two variants shipped: Magistral Small (24B parameters, Apache 2.0) and Magistral Medium (larger, closed-source), according to Mistral AI.
Speed as Differentiator
Mistral's headline claim: Magistral runs at 10x the speed of competing reasoning models in Le Chat, their consumer interface. Where o3 or Gemini thinking mode might take 30 seconds on a complex query, Magistral aims for 3 seconds.
If accurate, this reframes reasoning models from "use when you need deep thinking" to "use all the time." Speed removes the primary friction that prevents widespread reasoning model adoption.
Open-Source Reasoning
Magistral Small at 24B parameters with an Apache 2.0 license is a rare thing: a competitive open-source reasoning model small enough to run locally. It's available on Hugging Face and runs on capable consumer hardware. For developers who want chain-of-thought reasoning without cloud API dependencies, it is the standout option.
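For developers taking that path, a minimal local-inference sketch using the Hugging Face `transformers` library might look like the following. The repo id `mistralai/Magistral-Small-2506` and the system prompt are assumptions; check the model card on Hugging Face for the exact values Mistral recommends.

```python
# Sketch: running Magistral Small locally via Hugging Face transformers.
# MODEL_ID and the system prompt are assumptions -- verify against the
# model card before use.
MODEL_ID = "mistralai/Magistral-Small-2506"  # assumed Hugging Face repo id

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in a chat structure asking for step-by-step reasoning."""
    return [
        {"role": "system", "content": "Think step by step before answering."},
        {"role": "user", "content": question},
    ]

def run_local(question: str, max_new_tokens: int = 512) -> str:
    """Download the weights (tens of GB) and generate an answer locally."""
    # Imported lazily so the sketch can be read without transformers
    # (or the model weights) installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

At 24B parameters, expect to need a GPU with substantial memory, or an aggressively quantized build, for laptop use.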
Magistral Medium is the more capable variant, available through Mistral's API, Le Chat, and partner clouds. It provides frontier-class reasoning at Mistral's characteristically competitive pricing.
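For the hosted variant, a request through Mistral's Python SDK might look like the sketch below. The model alias `magistral-medium-latest` is an assumption; confirm the current name against Mistral's API documentation.

```python
import os

# Assumed model alias -- check Mistral's model list for the current name.
MODEL = "magistral-medium-latest"

def ask_magistral(question: str) -> str:
    """Send one chat request to Mistral's API and return the reply text."""
    # Imported lazily so this sketch can be read without the `mistralai`
    # package installed; requires MISTRAL_API_KEY in the environment.
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    response = client.chat.complete(
        model=MODEL,  # assumed alias for Magistral Medium
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

Pricing and rate limits are set per model on Mistral's platform, so the same code works against other aliases by changing `MODEL`.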
Multilingual Reasoning
Both models support reasoning in eight languages: English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese. Most competing reasoning models are English-first with limited multilingual capability. Mistral's European roots show in the language coverage.
Updated Versions
Magistral 1.1 (July 24, 2025) and 1.2 (September 17, 2025) followed. Version 1.2 added image analysis to Magistral Small while keeping it small enough to run on a MacBook, bringing multimodal reasoning to local hardware. The rapid iteration suggests Mistral is investing heavily in this product line.
Our Take
Magistral is Mistral's best strategic move in 2025. An open-source reasoning model at 10x speed addresses two major market gaps simultaneously. The multilingual support is a genuine advantage for European enterprises that need reasoning in French, German, or Spanish. Whether the 10x speed claim holds under rigorous testing is the key question — but if it does, Magistral makes reasoning models practical for applications where latency matters.
FAQ
What is Magistral? Magistral is Mistral's family of reasoning models that perform step-by-step chain-of-thought reasoning. Magistral Small (24B) is open-source under Apache 2.0; Magistral Medium is Mistral's more capable closed-source variant.
Is Magistral open source? Magistral Small (24B parameters) is open-source under Apache 2.0 and available on Hugging Face. Magistral Medium is not open-source.
How fast is Magistral compared to o3? Mistral claims Magistral runs at 10x the speed of competing reasoning models in Le Chat. Independent benchmarks should verify this claim.
What languages does Magistral support? Magistral supports reasoning in eight languages: English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.