Key Takeaway
On April 27, the Tencent Hunyuan team officially released Hy3 Preview, an open-source 295B-parameter MoE model (21B active) with a 256K context window and a 40% inference-efficiency improvement over the previous generation, positioned for Agent workloads, coding, and deep reasoning.
But more noteworthy is The Information's exclusive report on April 28: according to two insiders and internal Tencent memos, Tencent employees used Anthropic's Claude to help evaluate and fine-tune Hy3, even though Anthropic does not offer its services in China.
Hy3 Preview Technical Specs
| Parameter | Value |
|---|---|
| Architecture | MoE (Mixture of Experts) |
| Total Parameters | 295B |
| Active Parameters | 21B |
| Context Window | 256K |
| Inference Efficiency | 40% improvement over previous generation |
| Open Source | Yes |
| Positioning | Agent, coding, deep reasoning |
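The gap between 295B total and 21B active parameters is the defining property of MoE: a gating network picks a few experts per token, so only those experts' weights are exercised at inference time. A minimal sketch of top-k expert routing (illustrative only; expert count, top-k, and dimensions are assumptions, not Hy3's actual configuration):

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route one token through the top_k highest-scoring experts.

    x:        (d,) token hidden state
    experts:  list of (W, b) pairs, one toy single-layer FFN per expert
    gate_w:   (n_experts, d) gating matrix
    """
    logits = gate_w @ x                        # score every expert
    chosen = np.argsort(logits)[-top_k:]       # indices of the selected experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                   # softmax over selected experts only
    # Only the chosen experts' parameters run -> the "active parameter" count
    return sum(w * (W @ x + b)
               for w, (W, b) in zip(weights, (experts[i] for i in chosen)))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [(rng.normal(size=(d, d)), rng.normal(size=d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, d))
y = moe_forward(rng.normal(size=d), experts, gate_w)
print(y.shape)  # (8,)
```

With 2 of 16 experts active per token, only ~1/8 of expert weights run per forward pass, which is how a 295B model can have 21B active parameters.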
The Information’s Key Revelations
- Tencent used Claude to evaluate and fine-tune Hy3: Claude helped assess model output quality, generate fine-tuning data, and analyze model behavior
- Anthropic does not provide services in China: Tencent must therefore have accessed Claude through unofficial channels
- Internal Tencent memos confirm the practice: it is known within the company and happens at some scale
Why This Matters
1. “Cross-Training” Model Development
Tencent using Claude to evaluate and optimize its own model amounts to using the industry's strongest teacher model to train a student model. This resembles DeepSeek's knowledge distillation from GPT-4, but goes further: Claude directly participated in Hy3's fine-tuning.
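The workflow described, using a stronger model to score outputs and curate fine-tuning data, is commonly called "LLM-as-judge." A sketch of the pattern with the judge call stubbed out (in practice it would be an API request to the judge model; the prompt wording, 1-10 scale, and threshold are all assumptions):

```python
# Generic LLM-as-judge data filtering: a judge model scores candidate
# answers, and only high-scoring pairs are kept as fine-tuning data.
JUDGE_PROMPT = (
    "Rate the following answer to the question on a 1-10 scale for "
    "correctness and helpfulness. Reply with only the number.\n\n"
    "Question: {q}\nAnswer: {a}"
)

def filter_finetune_data(pairs, judge_fn, threshold=7):
    """Keep (question, answer) pairs the judge scores at or above threshold."""
    kept = []
    for q, a in pairs:
        score = int(judge_fn(JUDGE_PROMPT.format(q=q, a=a)))
        if score >= threshold:
            kept.append({"question": q, "answer": a, "judge_score": score})
    return kept

def fake_judge(prompt):
    # Deterministic stand-in for a real judge-model call.
    return "9" if "expert" in prompt else "3"

data = [
    ("What is MoE?", "A Mixture of Experts routes each token to a few expert subnetworks."),
    ("What is MoE?", "idk"),
]
kept_data = filter_finetune_data(data, fake_judge)
print(kept_data)  # only the first pair survives the threshold
```

Swapping `fake_judge` for a real API call to a frontier model is exactly the kind of usage the report alleges.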
2. Geopolitical Signal
Anthropic explicitly does not serve the Chinese market, yet Tencent reached Claude through unofficial channels. This reflects a reality of AI model development: technical boundaries are far blurrier than trade boundaries.
3. Benchmark Independence Questions
If Claude was part of Hy3's evaluation pipeline, benchmarks that compare Hy3 against Claude may no longer be independent; the judge shaped the contestant.
Domestic Model Competition Landscape
| Tier | Model | Evaluation |
|---|---|---|
| Entry | GLM-5.1 ≈ Kimi K2.6 | Top tier for Chinese coding |
| Below Entry | DeepSeek V4 Pro > Qwen 3.6 Max Preview | Close behind |
| Below Entry | MiMo V2.5 Pro > Qwen 3.6 Plus > Hy3 > Grok-4.20 | Hy3 preview not yet in Entry tier |
Tencent’s Other Open Source: Hy-MT1.5 Translation Model
On April 29, Tencent Hunyuan also open-sourced Hy-MT1.5-1.8B:
- 1.8B parameters compressed to 440MB (Sherry sparse ternary quantization)
- Fully offline on mobile
- Supports 33 languages, 1,056 translation directions
- Accepted at ACL 2026
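The figures above are internally consistent, and a quick back-of-envelope check shows why: 33 languages yield 33 × 32 = 1,056 ordered translation directions, and 1.8B parameters packed at roughly 2 bits each (a plausible encoding for ternary weights; the exact scheme is an assumption) come to about 450 MB, in line with the stated 440 MB once sparsity and packing overhead are accounted for:

```python
# Translation directions: every ordered (source, target) pair of distinct languages.
n_langs = 33
directions = n_langs * (n_langs - 1)
print(directions)  # 1056

# Model size under ternary quantization, assuming ~2 bits per parameter.
params = 1.8e9
bits_per_param = 2
size_mb = params * bits_per_param / 8 / 1e6
print(round(size_mb))  # 450
```

The small gap between 450 MB and the reported 440 MB is consistent with the "sparse" part of the quantization scheme pruning some weights entirely.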
Sources: