# Mistral Medium 3.5: The 128B Open-Weight AI That Lets You Fire Your SaaS Stack
> **Quick answer:** Mistral Medium 3.5 is a 128-billion parameter open-weight AI model released April 29, 2026. It scores 77.6% on SWE-Bench Verified — beating Devstral 2 and outrunning Qwen3.5 397B — while costing 40% less than GPT-4o per input token. It pairs with a new Work Mode for agentic office tasks and cloud-based coding agents in Vibe that can open pull requests autonomously.
Mistral Medium 3.5 landed on April 29, 2026, and the AI benchmark race just got more complicated for OpenAI and Anthropic. The Paris-based startup released a dense 128B open-weight model that combines instruction-following, reasoning, and coding in a single set of weights — and made it self-hostable on as few as four GPUs. If you're paying per token to a proprietary API and running more than 50 million tokens a day, Mistral just handed you an exit ramp.
## What Mistral Medium 3.5 Actually Is
Medium 3.5 is Mistral's new flagship model, replacing both Devstral 2 and Magistral in their lineup. It is a **dense 128B model** — not a mixture-of-experts (MoE) — which matters for deployment: you know exactly what compute you're paying for and there are no sparse routing surprises in production.
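That predictability is easy to sanity-check with arithmetic. A rough sketch (my own back-of-envelope helper, not an official sizing tool) of the weight memory a dense 128B model needs per GPU under tensor parallelism:

```python
def per_gpu_weight_gb(params_billions: float, bytes_per_param: float, num_gpus: int) -> float:
    """Weight memory per GPU when a dense model's parameters are
    sharded evenly across GPUs via tensor parallelism.
    Ignores KV cache and activation memory, which add real overhead."""
    total_bytes = params_billions * 1e9 * bytes_per_param
    total_gb = total_bytes / 1024**3
    return total_gb / num_gpus

# Dense 128B at FP8 (1 byte/param) across four GPUs:
print(round(per_gpu_weight_gb(128, 1.0, 4), 1))  # → 29.8

# The same model at BF16 (2 bytes/param):
print(round(per_gpu_weight_gb(128, 2.0, 4), 1))  # → 59.6
```

At FP8 the weights fit comfortably on four 80 GB cards with headroom for KV cache; at BF16 it is tight, which is presumably why the "four GPUs" claim assumes a quantized deployment.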
Key specs at a glance: