Mistral Medium 3.5 Released: 128B Params, 256K Context, with Workflows Enterprise Orchestration Layer

What Happened

On April 30, Mistral AI shipped two releases: Medium 3.5, its new flagship model, and Workflows, an enterprise orchestration layer.

Medium 3.5 Core Specs:

  • 128B parameters (dense architecture, not MoE)
  • 256K context window
  • Configurable reasoning effort
  • Unified instruction-following, reasoning, and coding
  • Open source under a modified MIT license
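Configurable reasoning effort is typically exposed as a request parameter. A minimal sketch of what such a payload might look like; the parameter name `reasoning_effort` and its values are assumptions for illustration, not Mistral's documented API:

```python
import json

# Hypothetical chat-completion payload. "reasoning_effort" is an assumed
# parameter name; check Mistral's API reference for the real field.
payload = {
    "model": "mistral-medium-3.5",
    "messages": [
        {"role": "user", "content": "Summarize this contract clause."}
    ],
    "reasoning_effort": "low",  # assumed values: "low" | "medium" | "high"
    "max_tokens": 512,
}

# Serialize as it would be sent in an HTTP request body
print(json.dumps(payload, indent=2))
```

The appeal of a per-request knob is cost control: routine calls can run at low effort, while hard reasoning tasks pay for more compute.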

Workflows Orchestration Layer:

  • Built on Temporal
  • Define complex AI business processes in Python
  • Validated by enterprise customers including ASML, ABANCA, and CMA-CGM
  • Addresses the gap between having capable models and running them reliably in production

Why It Matters

Medium 3.5’s Positioning

Mistral chose a different path from its competitors. While most vendors are shifting to MoE architectures, Mistral is sticking with a dense one:

| Feature | Mistral Medium 3.5 | Qwen 3.6 (MoE) | DeepSeek V4 (MoE) |
|---|---|---|---|
| Architecture | Dense 128B | MoE | MoE |
| Context | 256K | 256K | 128K |
| License | Modified MIT | Apache 2.0 | Open source |
| Configurable reasoning | Yes | | |
| Inference cost | Higher | Lower | Lower |

The dense architecture's advantage is stronger output consistency and more predictable latency, which matters most in enterprise scenarios that require deterministic outputs.

Workflows’ Enterprise Value

Enterprises don’t lack good models. They lack:

  1. Reliability: How do you retry, degrade, or escalate when an LLM call fails?
  2. Observability: Who triggered which call, at what cost, and with what output quality?
  3. Compliance: Do data flows meet audit requirements?

Workflows, built on Temporal, natively provides all three.
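Temporal supplies durable execution, automatic retries, and a complete event history for each workflow run. As a rough stdlib-only analogue of the retry/degrade/audit pattern described above (not Temporal's or Mistral's actual API; every name here is illustrative):

```python
import time

def call_model(prompt: str, model: str) -> str:
    """Stand-in for a primary LLM call; simulated as always timing out."""
    raise TimeoutError("upstream timeout")

def call_fallback(prompt: str) -> str:
    """Stand-in for a cheaper fallback model."""
    return "[fallback] summary of: " + prompt

# Observability: an append-only audit trail of (model, attempt, outcome).
# A real orchestrator persists this durably; Temporal records it as history.
audit_log = []

def reliable_completion(prompt: str, retries: int = 2) -> str:
    for attempt in range(1, retries + 1):
        try:
            result = call_model(prompt, model="primary")
            audit_log.append(("primary", attempt, "ok"))
            return result
        except TimeoutError:
            audit_log.append(("primary", attempt, "timeout"))
            time.sleep(0)  # placeholder for exponential backoff
    # Degrade: route to a fallback model instead of failing outright
    result = call_fallback(prompt)
    audit_log.append(("fallback", 1, "ok"))
    return result

print(reliable_completion("Q3 revenue report"))
print(audit_log)
```

The point of building this on a workflow engine rather than ad-hoc code is that retries, fallbacks, and the audit trail survive process crashes and can be replayed for compliance review.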

Competitive Landscape

Mistral is executing a “European Anthropic” strategy:

  • Model capability: The dense architecture aligns with Anthropic’s Claude philosophy
  • Enterprise product: Workflows competes with LangGraph and Airflow, but is lighter-weight
  • Open source strategy: The modified MIT license retains commercial control while embracing the community