Bottom Line Up Front
DeepSeek is playing two cards this round: continuing the price war and adapting to domestic compute. The 75% API discount on V4-Pro, originally set to expire in late April, has been extended to May 31. Simultaneously, DeepSeek has released a preview of native adaptation for Huawei Ascend chips. This is not a simple promotion; it is a strategically layered move riding China's AI infrastructure localization trend.
Data Comparison: V4-Pro’s Position in the Current Price War
| Model | Input Price (per M tokens) | Output Price (per M tokens) | Discount | Valid Until |
|---|---|---|---|---|
| DeepSeek V4-Pro (discounted) | ¥4 | ¥16 | 75% off | 2026-05-31 |
| DeepSeek V4-Pro (original) | ¥16 | ¥64 | — | — |
| Qwen 3.6 Plus | ¥3.5 | ¥14 | — | Ongoing |
| Kimi K2.6 | ¥5 | ¥20 | — | Ongoing |
| GLM-5 | ¥6 | ¥24 | — | Ongoing |
DeepSeek's discounted pricing now approaches Qwen 3.6 Plus's, placing it in the first tier of low-cost domestic models. Given V4-Pro's top-3 domestic ranking across multiple benchmarks, the cost-performance combination directly appeals to small-to-medium developers and enterprises with high-volume API workloads.
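To make the comparison concrete, here is a minimal cost sketch using the per-million-token prices from the table above; the monthly token volumes are purely illustrative, not real usage data:

```python
# Per-million-token prices (¥) taken from the comparison table above.
PRICES = {  # model: (input ¥/M tokens, output ¥/M tokens)
    "DeepSeek V4-Pro (discounted)": (4.0, 16.0),
    "DeepSeek V4-Pro (original)": (16.0, 64.0),
    "Qwen 3.6 Plus": (3.5, 14.0),
    "Kimi K2.6": (5.0, 20.0),
    "GLM-5": (6.0, 24.0),
}

def monthly_cost(input_m_tokens: float, output_m_tokens: float, model: str) -> float:
    """Monthly API bill in ¥ for the given token volumes (in millions)."""
    in_price, out_price = PRICES[model]
    return input_m_tokens * in_price + output_m_tokens * out_price

# Illustrative workload: 500M input tokens and 100M output tokens per month.
for model in PRICES:
    print(f"{model}: ¥{monthly_cost(500, 100, model):,.0f}")
```

At this volume the discounted V4-Pro bill (¥3,600) is exactly a quarter of the original-price bill (¥14,400), consistent with the advertised 75% discount, and sits between Qwen 3.6 Plus (¥3,150) and Kimi K2.6 (¥4,500).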
Strategic Significance of Huawei Ascend Adaptation
Key takeaways from DeepSeek’s Huawei chip adaptation:
- Breaking Nvidia dependency: Ascend adaptation means DeepSeek can deploy and run inference without high-end Nvidia GPUs. Amid ongoing US chip export restrictions, this is both a survival and market strategy.
- Government/enterprise market entry: Government agencies, SOEs, and financial institutions with strict data sovereignty requirements essentially have one viable option: Huawei Ascend plus a domestic LLM. Early adaptation locks in this growth market.
- Inference cost differences: Ascend’s inference cost structure differs from GPUs, requiring deep operator-level optimization. The “preview” wording suggests performance and stability may need several iteration cycles to match Nvidia versions.
Landscape Assessment
China’s LLM competition is shifting from “whose model is stronger” to “whose ecosystem is more complete”:
- Price war is the entry ticket: DeepSeek’s discount strategy pushes API prices to ¥4/M tokens — competitors must follow or exit the API market.
- Chip adaptation is the moat: Whoever runs the full domestic chip stack first (training + inference) captures the government/enterprise market dividend. DeepSeek is one step ahead of Kimi and GLM here.
- The window is closing: Huawei Ascend’s developer ecosystem is maturing rapidly; other model vendors’ adaptations are expected by late Q2.
Action Recommendations
- API consumers: Lock in DeepSeek V4-Pro's discount before May 31; heavy usage saves 75% relative to list price. Watch for a price rebound after expiry.
- Government/enterprise operators: Track DeepSeek’s Ascend formal release timeline and begin integration evaluation early.
- Users of competing models: This pricing combination poses an attrition risk to Qwen's and Kimi's mid-to-long-tail customer bases; factor the discount into any model comparison.