What Happened
On April 30, Mistral AI shipped two releases: Medium 3.5, its new flagship model, and Workflows, an enterprise orchestration layer.
Medium 3.5 Core Specs:
- 128B parameters (dense architecture, not MoE)
- 256K context window
- Configurable reasoning effort
- Unified instruction-following, reasoning, and coding
- Open source under a modified MIT license
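Mistral has not published an API schema for Medium 3.5, so the sketch below only illustrates how a configurable-reasoning knob might surface in a chat-style request payload. The `reasoning_effort` field, the model identifier, and the effort levels are all assumptions, not documented parameters.

```python
import json

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a hypothetical chat-completions payload.

    `reasoning_effort` is an assumed parameter name; check the
    official Mistral API docs for the real field once published.
    """
    assert effort in {"low", "medium", "high"}
    return {
        "model": "mistral-medium-3.5",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,     # hypothetical knob
    }

payload = build_request("Summarize this contract.", effort="high")
print(json.dumps(payload, indent=2))
```

The point of such a knob is that one deployment serves both cheap low-latency calls and expensive deliberate ones, instead of routing between two different models.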
Workflows Orchestration Layer:
- Built on Temporal
- Define complex AI business processes in Python
- Validated by ASML, ABANCA, CMA-CGM enterprise customers
- Addresses the common gap of "having good models but being unable to run them reliably in production"
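Mistral has not released the Workflows SDK, so the sketch below mimics in plain Python what a Temporal-backed process definition typically expresses: a step that is retried automatically and degrades to a fallback instead of failing outright. All names here are hypothetical.

```python
import time

def run_step(fn, *, retries=3, backoff=0.0, fallback=None):
    """Run one workflow step with retry and optional fallback,
    roughly the guarantee a Temporal-backed engine provides."""
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                if fallback is not None:
                    return fallback()  # degrade instead of failing
                raise
            time.sleep(backoff * attempt)

# Usage: a flaky "LLM call" exhausts its retries, so the cheaper
# fallback answers instead of the whole process crashing.
calls = {"n": 0}
def flaky_llm():
    calls["n"] += 1
    raise TimeoutError("model overloaded")

print(run_step(flaky_llm, retries=2, fallback=lambda: "cached summary"))
# prints "cached summary"
```

In a real orchestration layer this loop is durable: state survives process restarts, which is exactly what hand-rolled retry code does not give you.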
Why It Matters
Medium 3.5’s Positioning
Mistral chose a different path from its competitors: while most vendors are shifting to MoE architectures, Mistral is sticking with dense:
| Feature | Mistral Medium 3.5 | Qwen 3.6 (MoE) | DeepSeek V4 (MoE) |
|---|---|---|---|
| Architecture | Dense 128B | MoE | MoE |
| Context | 256K | 256K | 128K |
| License | Modified MIT | Apache 2.0 | Open source |
| Configurable Reasoning | ✅ | ❌ | ❌ |
| Inference Cost | Higher | Lower | Lower |
Dense architecture’s advantage: stronger output consistency and more predictable latency, which matter most in enterprise scenarios that require deterministic behavior.
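The inference-cost row follows from active parameter counts: a dense model runs all of its weights for every token, while an MoE model activates only a few experts. The MoE figures below are hypothetical for illustration, not published specs of Qwen 3.6 or DeepSeek V4.

```python
def active_params(total_b: float, active_fraction: float = 1.0) -> float:
    """Parameters touched per token, in billions."""
    return total_b * active_fraction

dense = active_params(128)                    # dense: all 128B, every token
# Hypothetical MoE: 256B total, but only ~1/8 of experts active per token.
moe = active_params(256, active_fraction=1 / 8)

print(f"dense active: {dense:.0f}B, MoE active: {moe:.0f}B")
# Dense pays more compute per token, but it runs the same weights
# every time, which is why latency and outputs are more predictable.
```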
Workflows’ Enterprise Value
Enterprises don’t lack good models. They lack:
- Reliability: How to retry, degrade, or escalate on LLM failures?
- Observability: Who triggered what, spent how much, with what quality?
- Compliance: Does data flow meet audit requirements?
Workflows, built on Temporal, natively provides all three.
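None of these three concerns is exotic. The sketch below hand-rolls the observability piece (who triggered what, at what cost) that an orchestration layer records automatically; every field and function name here is illustrative, not part of any Mistral or Temporal API.

```python
import functools
import time

AUDIT_LOG: list[dict] = []

def audited(step_name: str):
    """Record caller, duration, and token spend for each step --
    the kind of trace an orchestration layer emits for free."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, triggered_by: str, **kwargs):
            start = time.time()
            result, tokens = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "step": step_name,
                "triggered_by": triggered_by,
                "duration_s": round(time.time() - start, 3),
                "tokens": tokens,
            })
            return result
        return wrapper
    return decorator

@audited("summarize")
def summarize(text: str):
    # Stand-in for a model call; returns (output, token count).
    return text[:20], len(text.split())

summarize("quarterly revenue grew 12 percent", triggered_by="alice")
print(AUDIT_LOG[0]["step"], AUDIT_LOG[0]["triggered_by"])
```

A durable, queryable version of this log is also what makes the compliance question answerable after the fact.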
Competitive Landscape
Mistral is executing a “European Anthropic” strategy:
- Model capability: Dense architecture aligns with Anthropic’s Claude philosophy
- Enterprise product: Workflows targets LangGraph/Airflow but lighter
- Open source strategy: Modified MIT retains commercial control while embracing community