Core Discovery
DeepSeek V4’s delayed release sparked widespread speculation in late April. But according to a CCTV-affiliated account, the delay is not a development roadblock but a deliberate strategic choice:
“Rather than rushing to launch to compete with Western frontier models, DeepSeek chose to deeply adapt V4 for China’s domestic chip ecosystem — specifically Huawei Ascend.”
On April 24, DeepSeek released a high-performance inference optimization practice report based on Huawei CANN, fully supporting Huawei Ascend Supernode. This confirms the strategic shift.
Why This is a Major Signal
Background: China’s AI Chip Dilemma
| Dimension | NVIDIA | Huawei Ascend |
|---|---|---|
| Training ecosystem | CUDA ecosystem mature, global standard | CANN ecosystem under construction |
| Availability | Restricted by US export controls | Available domestically |
| Inference performance | Industry benchmark | Rapidly catching up |
| Software compatibility | Almost all models natively supported | Requires custom adaptation |
For years, Chinese AI companies’ strategy was: train on NVIDIA, infer on domestic chips. DeepSeek V4’s choice means training is also shifting to domestic chips.
Technical Implications
DeepSeek V4 is a trillion-parameter MoE model with extremely high training infrastructure demands. If it can be trained on Ascend:
- Huawei Ascend’s training capability is validated: No longer “inference is fine, training is not”
- CUDA dependency can be broken: At least for MoE architectures, domestic chips have a viable path
- Strategic supply chain security: Against the backdrop of continued US export tightening, this is a defensive layout
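To see why a trillion-parameter MoE stresses training infrastructure in a particular way, a rough back-of-the-envelope helps. The sketch below uses purely hypothetical numbers for a generic trillion-parameter MoE, not DeepSeek V4’s published configuration:

```python
# Illustrative MoE sizing sketch. All figures are hypothetical
# assumptions for a generic trillion-parameter MoE, not DeepSeek V4's
# actual (unpublished) configuration.

def moe_param_counts(n_experts: int, params_per_expert: int,
                     shared_params: int, experts_per_token: int):
    """Return (total parameters, parameters active per token)."""
    total = shared_params + n_experts * params_per_expert
    active = shared_params + experts_per_token * params_per_expert
    return total, active

# Hypothetical config: 256 experts of ~3.9B params each, ~8B shared
# (attention, embeddings), 8 experts routed per token.
total, active = moe_param_counts(
    n_experts=256,
    params_per_expert=3_900_000_000,
    shared_params=8_000_000_000,
    experts_per_token=8,
)

print(f"total:  {total / 1e12:.2f}T parameters")  # must all sit in memory
print(f"active: {active / 1e9:.0f}B per token")   # compute per token
```

The gap between total and active parameters is the point: every parameter must be held in memory and kept in sync across devices, while only a small fraction is computed per token, so memory capacity and interconnect bandwidth (what a “Supernode” topology provides) dominate the infrastructure demands rather than raw FLOPs alone.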
DeepSeek’s Ascend Adaptation Progress
Based on the April 24 technical report:
| Stage | Timing | Achievement |
|---|---|---|
| Ascend Day 0 | 2026.04.24 | High-performance inference optimization based on CANN |
| Supernode support | Same day | Full support for Huawei Ascend Supernode |
| Weights open-sourced | Preview released | Developers can test inference on Ascend |
Notably, DeepSeek simultaneously open-sourced the model weights. This serves not only the broader tech community; it lets Ascend ecosystem developers begin adaptation work early.
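For developers who want to try the open weights, the first practical step is making PyTorch see the Ascend hardware. `torch_npu` is Huawei’s real PyTorch plugin for Ascend; everything else in this sketch (including the commented loading pattern and the placeholder model name) is illustrative, not an official DeepSeek recipe:

```python
# Minimal sketch: select an Ascend NPU when torch_npu is installed,
# otherwise fall back to CPU. torch_npu is Huawei's PyTorch plugin
# for Ascend (built on the CANN stack).

def pick_device() -> str:
    try:
        import torch
        import torch_npu  # noqa: F401  (registers the "npu" backend)
        if torch.npu.is_available():
            return "npu:0"
    except ImportError:
        # torch or torch_npu not installed: run on CPU instead
        pass
    return "cpu"

device = pick_device()
print(f"running on {device}")

# Loading the weights would then follow the usual transformers
# pattern (the model name below is a placeholder, not a release id):
#
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained("<deepseek-v4-weights>")
#   model.to(device)
```

On a machine without an Ascend card the helper simply falls back to CPU, so the same deployment script can be smoke-tested before moving to Ascend hardware.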
Industry Impact
For Other Chinese Model Companies
DeepSeek’s choice may create a demonstration effect:
- If V4 performs well on Ascend: Other companies (Qwen, Kimi, GLM) may accelerate domestic chip adaptation
- If performance falls short: May reinforce the “NVIDIA is irreplaceable” narrative
For Huawei
- Major endorsement for Ascend ecosystem: Active adaptation by top AI companies is the best advertising
- Real-world test for CANN software stack: Training and inference of trillion-parameter MoE is an extreme test
For NVIDIA
- Further erosion in the Chinese market: Even the H20, a chip designed specifically for export-compliant supply to China, is being displaced
- Short-term impact limited: Global AI training still primarily relies on NVIDIA
Market Positioning
This is more than a delayed model release: it marks a watershed moment for China’s AI infrastructure roadmap.
For years, the default choice for Chinese AI companies was NVIDIA GPU + CUDA ecosystem. DeepSeek V4’s Ascend adaptation strategy means:
- Domestic substitution on the training side moves from possibility to practice
- Coordinated model-and-chip development becomes the core of China’s AI strategy
- US export controls instead accelerate China’s autonomous ecosystem construction
Actionable Takeaways
- Enterprise users: If your business operates in mainland China, watch DeepSeek V4’s actual performance on Ascend; it may influence your technology choices
- Developers: DeepSeek has open-sourced the weights; try deploying and testing in Ascend environments early
- Investors: Huawei Ascend ecosystem’s investment value may be reassessed; watch related supply chain companies