Tencent Hy3 Preview Released, The Information Reveals Claude "Shadows" Behind It

Key Takeaway

On April 27, the Tencent Hunyuan team officially released Hy3 Preview: an open-source 295B-parameter MoE model (21B active parameters) with a 256K context window and a 40% inference-efficiency improvement, positioned for Agent workloads, coding, and deep reasoning.

More noteworthy, however, is The Information’s exclusive report on April 28: according to two insiders and internal Tencent memos, Tencent employees used Anthropic’s Claude to help evaluate and fine-tune Hy3, even though Anthropic does not provide services in China.

Hy3 Preview Technical Specs

  • Architecture: MoE (Mixture of Experts)
  • Total Parameters: 295B
  • Active Parameters: 21B
  • Context Window: 256K tokens
  • Inference Efficiency: 40% improvement over the previous generation
  • Open Source: Yes
  • Positioning: Agent, coding, deep reasoning

The Information’s Key Revelations

  1. Tencent used Claude to evaluate and fine-tune Hy3: Claude was used to help evaluate model output quality, generate fine-tuning data, and analyze model behavior
  2. Anthropic does not provide services to China: This means Tencent used unofficial channels to access Claude
  3. Tencent internal memos confirmed the practice: it is acknowledged internally and happening at some scale within Tencent

Why This Matters

1. “Cross-Training” Model Development

Tencent using Claude to evaluate and optimize its own model is, in effect, using the industry’s strongest teacher model to train a student model. This is similar to DeepSeek’s reported knowledge distillation from GPT-4, but goes further: according to The Information, Claude participated directly in Hy3’s fine-tuning.
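
The report does not describe Tencent’s actual pipeline, and none of the code below comes from it; it only illustrates the generic teacher-student pattern: a stronger judge model scores and rewrites a weaker model’s drafts, and only teacher-approved pairs enter the fine-tuning set. A minimal sketch in Python, where call_student and call_teacher are hypothetical stubs standing in for the respective model APIs:

```python
import json

def call_student(prompt: str) -> str:
    """Stub for the model being trained (hypothetical; wire to its API)."""
    raise NotImplementedError

def call_teacher(prompt: str) -> str:
    """Stub for the stronger teacher/judge model (hypothetical; wire to its API)."""
    raise NotImplementedError

JUDGE_TEMPLATE = (
    "Rate the answer below from 1 to 10 for correctness and clarity, "
    "then rewrite it as well as you can.\n\n"
    "Question: {question}\nAnswer: {answer}\n\n"
    'Reply as JSON: {{"score": <int>, "improved_answer": "<text>"}}'
)

def build_finetune_example(question: str, min_score: int = 6) -> dict | None:
    """Draft with the student, judge and improve with the teacher,
    and keep the pair only if the teacher's score clears the bar."""
    draft = call_student(question)
    verdict = json.loads(
        call_teacher(JUDGE_TEMPLATE.format(question=question, answer=draft))
    )
    if verdict["score"] < min_score:
        return None  # discard samples the teacher rates poorly
    return {"prompt": question, "completion": verdict["improved_answer"]}
```

The score gate is what makes the benchmark-independence question below worth asking: the resulting fine-tuning set is pre-filtered by the judge’s own preferences.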

2. Geopolitical Signal

Anthropic explicitly does not provide services in China, yet Tencent reportedly accessed Claude through unofficial channels. This reflects a reality of AI model development: technical boundaries are far blurrier than trade boundaries.

3. Benchmark Independence Questions

If Claude was part of Hy3’s evaluation loop, how much can benchmarks that pit Hy3 against Claude be trusted? LLM judges are known to favor outputs that resemble their own style, so a model tuned against Claude’s judgments may look disproportionately strong in Claude-judged comparisons.
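
One practical way to probe this, not something from the article, is to have two unrelated judge models rank the same answer pairs and measure how often they agree; persistently low agreement on Hy3-vs-Claude comparisons would point to judge-specific bias. A sketch under that assumption, with judge_with as a hypothetical wrapper:

```python
def judge_with(judge: str, question: str, answer_a: str, answer_b: str) -> str:
    """Return 'A' or 'B' from the named judge model (hypothetical stub; wire to an API)."""
    raise NotImplementedError

def judge_agreement(items: list[tuple[str, str, str]],
                    judge_1: str = "claude",
                    judge_2: str = "independent-judge") -> float:
    """Fraction of (question, answer_a, answer_b) items on which both judges pick the same winner."""
    verdicts = [(judge_with(judge_1, q, a, b), judge_with(judge_2, q, a, b))
                for q, a, b in items]
    return sum(v1 == v2 for v1, v2 in verdicts) / len(verdicts)
```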

Domestic Model Competition Landscape

  • Entry tier: GLM-5.1 ≈ Kimi K2.6 (top tier for Chinese coding)
  • Below Entry: DeepSeek V4 Pro > Qwen 3.6 Max Preview (close behind)
  • Below Entry: MiMo V2.5 Pro > Qwen 3.6 Plus > Hy3 > Grok-4.20 (Hy3 Preview not yet in the Entry tier)

Tencent’s Other Open Source: Hy-MT1.5 Translation Model

On April 29, Tencent Hunyuan also open-sourced Hy-MT1.5-1.8B:

  • 1.8B parameters compressed to 440 MB (Sherry sparse ternary quantization; see the back-of-envelope check after this list)
  • Fully offline on mobile
  • Supports 33 languages, 1,056 translation directions
  • Accepted at ACL 2026
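
Tencent has not published the packing format, so the assumptions below (a plain 2-bit encoding per ternary weight) are mine, but they show the 440 MB figure is plausible back-of-envelope arithmetic:

```python
# Back-of-envelope check of "1.8B ternary parameters in ~440 MB".
# Assumption (mine, not from the release): each weight in {-1, 0, +1}
# is stored in 2 bits; denser coding (~1.58 bits) or sparsity shrinks it further.
params = 1.8e9

size_2bit_mb = params * 2 / 8 / 1e6         # fixed-width ternary packing
size_entropy_mb = params * 1.585 / 8 / 1e6  # log2(3) information-theoretic floor
size_fp16_mb = params * 16 / 8 / 1e6        # same model in FP16, for contrast

print(f"2-bit packing:  {size_2bit_mb:.0f} MB")    # ~450 MB
print(f"entropy floor:  {size_entropy_mb:.0f} MB")  # ~357 MB
print(f"FP16 baseline:  {size_fp16_mb:.0f} MB")    # ~3600 MB
# The reported 440 MB lands between the 2-bit packing and the entropy floor.
```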

Sources: