Qwen 3.6 Open-Source Review: 35B MoE Model Approaches Claude 4.5 Opus in Coding

Alibaba recently open-sourced the Qwen3.6 series, including Qwen3.6-27B (dense) and Qwen3.6-35B-A3B (MoE). The update brings significant gains in coding ability, a longer context window, and better architectural efficiency.

Model Specifications

| Spec | Qwen3.5-27B | Qwen3.6-27B | Qwen3.6-35B-A3B |
|------|-------------|-------------|-----------------|
| Architecture | Dense | Dense | MoE (3B active) |
| Context (default) | N/A | 262K tokens | 262K tokens |
| Context (extended) | N/A | Up to 1.01M | Up to 1.01M |
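
For the extended ~1M-token context, earlier Qwen releases document YaRN-style RoPE scaling configured at load time. The sketch below assumes (but does not confirm) that Qwen3.6 follows the same pattern; the scaling factor and config keys are illustrative assumptions, not published values.

```python
# Hedged sketch: enabling extended context via YaRN-style RoPE scaling, as
# documented for earlier Qwen releases. Whether Qwen3.6 uses the same
# mechanism, and the exact factor, are assumptions for illustration only.
from transformers import AutoConfig, AutoModelForCausalLM

name = "Qwen/Qwen3.6-35B-A3B"
config = AutoConfig.from_pretrained(name)
config.rope_scaling = {
    "rope_type": "yarn",                        # assumption: YaRN, as in Qwen2.5/Qwen3
    "factor": 4.0,                              # assumption: 262K x 4 ≈ 1M tokens
    "original_max_position_embeddings": 262144,
}
model = AutoModelForCausalLM.from_pretrained(
    name, config=config, torch_dtype="auto", device_map="auto"
)
```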

Qwen3.6-35B-A3B has 35B total parameters but activates only 3B of them per token during inference, so its per-token compute cost is close to that of a 3B dense model while its quality reaches the 30B+ class.
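
To make the total-vs-active parameter distinction concrete, here is a toy top-k expert-routing layer. It illustrates the general MoE pattern (only the selected experts run for each token); it is not Qwen's actual routing code.

```python
# Toy top-k mixture-of-experts layer. Illustrative of the general MoE
# pattern only; NOT Qwen's actual implementation.
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):
        # x: (num_tokens, dim). Each token is routed to its top-k experts;
        # the other experts stay idle, so only top_k/num_experts of the
        # expert parameters are "active" for any given token.
        weights = self.router(x).softmax(dim=-1)         # (num_tokens, num_experts)
        top_w, top_i = weights.topk(self.top_k, dim=-1)  # (num_tokens, top_k)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = top_i[:, k] == e                  # tokens whose k-th pick is expert e
                if mask.any():
                    out[mask] += top_w[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

moe = ToyMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64]); each token used 2 of 8 experts
```

The same logic scales up: with 35B total parameters, routing each token through a small subset of experts is what keeps the active count at 3B.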

Benchmark Results

| Model | SWE-bench Verified |
|-------|--------------------|
| Qwen3.5-27B | 75.0 |
| Qwen3.5-397B-A17B | 76.2 |
| Qwen3.6-35B-A3B | Near Claude 4.5 Opus |
| Gemma4-31B | 52.x |

Qwen3.6-35B-A3B’s coding-agent capability approaches Claude 4.5 Opus, the first time an open-source model has come this close on this axis.

Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.6-35B-A3B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.6-35B-A3B", torch_dtype="auto", device_map="auto")
```
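
Continuing from the snippet above, a minimal generation example using the tokenizer's chat template (the standard transformers workflow); the prompt is just an illustration.

```python
# Minimal generation sketch via the chat template (standard transformers API).
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```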

Hardware: Qwen3.6-35B-A3B is said to run on a single A100 40GB card. Note that the MoE design lowers per-token compute, not weight memory: all 35B parameters must still be resident, and bf16 weights alone are roughly 70 GB, so the single-card figure implies quantized weights.
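
As a rough check on that figure, here is a hedged 8-bit loading sketch using the standard transformers bitsandbytes integration; whether this particular checkpoint supports bitsandbytes quantization out of the box is an assumption.

```python
# Hedged sketch: 8-bit weights (~35 GB for 35B params) fit a 40 GB A100.
# BitsAndBytesConfig is the standard transformers API; support for this
# specific checkpoint is an assumption.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.6-35B-A3B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```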

