On April 20, Alibaba released a preview of its next-generation Qianwen flagship model, Qwen3.6-Max-Preview, which surpassed GLM-5.1 and MiniMax-M2.7 in the authoritative Artificial Analysis evaluation to become the top-ranked domestic model.
## Key Improvements
| Benchmark | Improvement |
|---|---|
| SkillsBench (Agent Programming) | +9.9 points |
| SciCode (Scientific Coding) | +10.8 points |
| NL2Repo | +5.0 points |
| Terminal-Bench | Significant improvement |
## Closed-Source Preview, Not Yet Open-Sourced
This Max-Preview release is closed-source, and its weights have not been made public. API access is available only through Alibaba Cloud's Bailian platform and Qwen Studio. The Qwen 3.6 series has so far shipped three closed-source versions (Max-Preview, Plus, and Flash) and has open-sourced Qwen3.6-35B-A3B.
## Cost-Performance Positioning
Within the high-end cost range of 100-250 yuan per thousand requests, Qwen3.6-Max-Preview’s comprehensive capabilities surpass competing Claude and GPT models at the same price point. Regular users can try it for free on Qwen Studio, while enterprises and developers can access the API via Bailian.
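For developers, Bailian exposes an OpenAI-compatible chat endpoint; a minimal sketch of how a request to the new model might be assembled is below. The model identifier `qwen3.6-max-preview` and the endpoint URL are assumptions based on Bailian's usual naming conventions, not confirmed by the article; check the platform's model list before use.

```python
import json

# Assumed values: the article names the platform but not the exact API
# identifiers, so both of these are illustrative placeholders.
BASE_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"
MODEL_ID = "qwen3.6-max-preview"

def build_chat_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Build headers and JSON body for an OpenAI-compatible chat call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

# Construct (but do not send) a request; POST headers/body to BASE_URL
# with any HTTP client once you have a Bailian API key.
headers, body = build_chat_request("Hello, Qwen!", "sk-...")
print(json.dumps(body, ensure_ascii=False))
```

The same payload works against any OpenAI-compatible client library by pointing its base URL at Bailian's compatible-mode endpoint.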
## Qwen 3.6 Family
| Model | Type | Status |
|---|---|---|
| Qwen3.6-Max-Preview | Closed-source flagship preview | API available |
| Qwen3.6-Plus | Closed-source flagship | Released April 2 |
| Qwen3.6-Flash | Closed-source lightweight version | Live |
| Qwen3.6-35B-A3B | Open-source MoE | Open-sourced |
| Qwen3.6-27B | Open-source multimodal | Released April 22 |
Also worth noting is Qwen3.6-27B, released on April 22: a dense multimodal model with only 27 billion parameters that comprehensively outperforms its predecessor Qwen3.5-397B-A17B (397 billion total parameters) across multiple coding benchmarks, a classic case of "punching above its weight."
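The size gap can be made concrete with the article's own figures. One caveat: the "-A17B" suffix conventionally denotes roughly 17 billion active parameters per token in an MoE model; that reading is an assumption, since the article only states the 397-billion total.

```python
# Parameter-count comparison from the article's figures.
dense_params_b = 27   # Qwen3.6-27B: dense, so all parameters are active
moe_total_b = 397     # Qwen3.5-397B-A17B: total parameters (from article)
moe_active_b = 17     # assumed active parameters, inferred from "A17B" suffix

ratio = moe_total_b / dense_params_b
print(f"total-parameter ratio: {ratio:.1f}x")          # ~14.7x smaller overall
print(f"active params: {moe_active_b} B (MoE) vs {dense_params_b} B (dense)")
```

So the new dense model matches a predecessor nearly 15 times its total size, though the per-token compute gap is far smaller if the active-parameter reading holds.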
Primary sources: ZOL, Shanghai Securities News, chinaz