
Mistral Ships Magistral Reasoning Models — 10x Faster Than Competitors

Magistral Small (24B, Apache 2.0) and Magistral Medium bring step-by-step reasoning to Mistral's lineup. Le Chat delivers Magistral responses at 10x the speed of competing reasoning models.


Sarah Mueller

Tuesday, June 10, 2025 · 3 min read

Mistral released its Magistral reasoning models on June 10, 2025 — step-by-step reasoning models comparable to OpenAI's o3 and Gemini's thinking mode. Two variants shipped: Magistral Small (24B parameters, Apache 2.0) and Magistral Medium (larger, closed-source), according to Mistral AI.

Speed as Differentiator

Mistral's headline claim: Magistral runs at 10x the speed of competing reasoning models in Le Chat, its consumer interface. Where o3 or Gemini's thinking mode might take 30 seconds on a complex query, Magistral aims for 3 seconds.

If accurate, this reframes reasoning models from "use when you need deep thinking" to "use all the time." Speed removes the primary friction that prevents widespread reasoning model adoption.

Open-Source Reasoning

Magistral Small at 24B parameters with an Apache 2.0 license is the first competitive open-source reasoning model. It's available on Hugging Face and runs locally on capable hardware. For developers who want chain-of-thought reasoning without a cloud API dependency, it is the obvious choice.
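For readers who want to try the open-weights variant, here is a minimal local-inference sketch using Hugging Face transformers. The model id, chat-message format, and hardware notes are assumptions, not confirmed by this article; check the actual model card on Hugging Face before use.

```python
# Sketch: running Magistral Small locally via Hugging Face transformers.
# MODEL_ID is an assumed Hugging Face repo name; verify it against the
# real model card. A 24B model needs a high-memory GPU or quantization.
MODEL_ID = "mistralai/Magistral-Small-2506"  # assumed model id

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat-message format transformers expects."""
    return [{"role": "user", "content": question}]

def reason(question: str, max_new_tokens: int = 1024) -> str:
    """Generate a step-by-step answer locally. Heavy: downloads the model."""
    from transformers import pipeline  # lazy import: heavy optional dependency

    generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    result = generator(build_messages(question), max_new_tokens=max_new_tokens)
    # Chat pipelines return the full conversation; take the last (assistant) turn.
    return result[0]["generated_text"][-1]["content"]

if __name__ == "__main__":
    print(reason("If a train covers 180 km in 90 minutes, what is its speed in km/h?"))
```

The heavy `transformers` import is deferred into `reason()` so the module stays importable on machines without the library installed.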

Magistral Medium is the more capable variant, available through Mistral's API, Le Chat, and partner clouds. It provides frontier-class reasoning at Mistral's characteristically competitive pricing.
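For the hosted variant, a hedged sketch of a call to Mistral's chat completions endpoint using only the standard library. The model alias `magistral-medium-latest` is an assumption; consult Mistral's API documentation for current model ids and request fields.

```python
# Sketch: querying Magistral Medium through Mistral's chat completions API.
# The model alias below is assumed; check docs.mistral.ai before relying on it.
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(question: str, model: str = "magistral-medium-latest") -> dict:
    """Assemble the JSON body for a single-turn reasoning query."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

def ask(question: str) -> str:
    """Send the request; requires MISTRAL_API_KEY in the environment."""
    body = json.dumps(build_request(question)).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Standard chat-completions shape: first choice, assistant message content.
    return data["choices"][0]["message"]["content"]
```

Keeping the request-building step as a pure function makes it easy to test without a network connection or API key.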

Multilingual Reasoning

Both models support reasoning in eight languages: English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese. Most competing reasoning models are English-first with limited multilingual capability. Mistral's European roots show in the language coverage.

Updated Versions

Magistral 1.1 (July 24) and 1.2 (September 17) followed, with 1.2 adding image analysis to Magistral Small while keeping it small enough to run on a MacBook, enabling local multimodal reasoning. The rapid iteration suggests Mistral is investing heavily in this product line.

Our Take

Magistral is Mistral's best strategic move in 2025. An open-source reasoning model at 10x speed addresses two major market gaps simultaneously. The multilingual support is a genuine advantage for European enterprises that need reasoning in French, German, or Spanish. Whether the 10x speed claim holds under rigorous testing is the key question — but if it does, Magistral makes reasoning models practical for applications where latency matters.

FAQ

What is Magistral? Magistral is Mistral's family of reasoning models that perform step-by-step chain-of-thought reasoning. Magistral Small (24B) is open-source under Apache 2.0; Magistral Medium is Mistral's more capable closed-source variant.

Is Magistral open source? Magistral Small (24B parameters) is open-source under Apache 2.0 and available on Hugging Face. Magistral Medium is not open-source.

How fast is Magistral compared to o3? Mistral claims Magistral runs at 10x the speed of competing reasoning models in Le Chat. Independent benchmarks should verify this claim.

What languages does Magistral support? Magistral supports reasoning in eight languages: English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.

Tools Mentioned

  • Mistral: European AI lab building efficient open and commercial LLMs (usage-based API)
  • GPT (OpenAI): Industry-leading large language models powering ChatGPT ($20/mo, ChatGPT Plus)
  • Claude (Anthropic): Safe, helpful AI assistant with extended context and reasoning ($20/mo, Pro)
  • Gemini (Google): Google's multimodal AI model family ($19.99/mo, Advanced)
