DeepSeek V4 series’ release rhythm has confused many: the model was ready, so why the ten-week delay? A CCTV-affiliated social media account provides the answer. This was not a technical delay but a strategic choice: DeepSeek deliberately postponed the release to achieve deep alignment with China’s domestic chip ecosystem.
What Happened
DeepSeek V4 Pro achieved parity with GPT-5.2 on FoodTruck Bench, at an inference cost just 1/17 that of its US counterpart. It is the first Chinese model to reach the frontier tier on this benchmark.
Key data:
| Dimension | DeepSeek V4 Pro | GPT-5.2 | Gap |
|---|---|---|---|
| FoodTruck Bench | Parity | Baseline | 0 |
| Inference Cost | $0.11/million tokens | ~$1.87/million tokens | 17x cheaper |
| Release Timing | Delayed 10 weeks | Normal cadence | Strategic delay |
| Chip Alignment | Domestic chips prioritized | NVIDIA exclusive | Divergent paths |
More noteworthy is the API pricing strategy: through May 31, the DeepSeek V4 series APIs carry a 75% discount across the board, with input tokens at just $0.11 per million. Among open-source models, this pricing has virtually no competition.
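The pricing arithmetic above can be checked directly. The figures below come from this article; the post-promotion list price is inferred from the stated 75% discount, not stated by DeepSeek.

```python
# Prices quoted in the article (USD per million input tokens).
DEEPSEEK_V4_DISCOUNTED = 0.11   # promotional price through May 31
GPT_5_2 = 1.87                  # approximate US counterpart price
DISCOUNT = 0.75                 # "75% discount across the board"

# The headline cost ratio: 1.87 / 0.11 = 17x.
ratio = GPT_5_2 / DEEPSEEK_V4_DISCOUNTED
print(f"Cost ratio: {ratio:.1f}x")

# Implied list price once the promotion ends (inferred, not stated in the article).
list_price = DEEPSEEK_V4_DISCOUNTED / (1 - DISCOUNT)
print(f"Implied post-promotion price: ${list_price:.2f}/M input tokens")

# Worked example: a 2-billion-token batch-translation job.
tokens_in_millions = 2_000
print(f"DeepSeek V4 Pro: ${tokens_in_millions * DEEPSEEK_V4_DISCOUNTED:,.0f}")
print(f"GPT-5.2:         ${tokens_in_millions * GPT_5_2:,.0f}")
```

At 2 billion input tokens the bill is $220 versus $3,740, which is why the cost gap, not raw capability, dominates large-batch workloads.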
Why the 10-Week Delay?
Reuters, citing a CCTV-affiliated account, reports that DeepSeek V4’s delayed release points to a clear strategic shift: deeper integration with China’s domestic chip ecosystem.
This is not a simple “support domestic chips” slogan, but a concrete engineering decision:
- Compute supply chain security: US export controls on high-end GPUs continue to tighten, making model training and inference dependent on NVIDIA chips vulnerable to supply disruption
- Cost restructuring: domestic chips’ procurement and maintenance costs are far lower than imported GPUs’, and the inference-side price advantage is passed directly on to users
- Ecosystem binding: Deep model-chip adaptation means higher efficiency and lower latency, creating a positive feedback loop
In other words, DeepSeek traded 10 weeks for a critical capability: running at US-model-equivalent performance on domestic chips, at just 1/17 the cost.
Landscape Assessment
US-China AI competition is shifting from “model capability gap” to “compute ecosystem gap.”
US Path: NVIDIA GPUs + closed-source models + premium cloud services. Advantage: leading performance, mature toolchains; Disadvantage: high costs, hardware supply constraints.
China Path: Domestic chips + open-source models + low-cost APIs. Advantage: extremely low costs, supply chain autonomy; Disadvantage: toolchain ecosystem still under construction, international market recognition pending.
DeepSeek V4 Pro’s signal is clear: Chinese open-source models are pursuing a “performance parity + cost dominance” route. The FoodTruck Bench results show the capability gap has shrunk to nothing more than a 10-week release lag, while the 17x cost differential is the core weapon for commercialization.
How to Use It
| Scenario | Recommendation |
|---|---|
| Large-scale API calls (log processing, batch translation) | DeepSeek V4 Pro API at $0.11/million tokens is the top choice, especially with the 75% discount |
| Data-sensitive scenarios | Open-source version available on Hugging Face (MIT license), deployable on domestic chip servers |
| Agent backend | 1M context window + low cost, ideal as an Agent’s primary model |
| Comparative testing | Run FoodTruck Bench or SWE-bench for side-by-side comparison with GPT-5.2 |
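For the large-scale API scenario in the table, a request can be sketched as below. The article does not specify the endpoint or model id; DeepSeek’s existing API follows the OpenAI chat-completions format, so the URL and the `deepseek-v4-pro` name here are assumptions to be checked against the official docs. The payload is built but not sent.

```python
import json

# Assumed endpoint and model id -- not confirmed by the article.
API_URL = "https://api.deepseek.com/chat/completions"
MODEL = "deepseek-v4-pro"

def build_request(prompt: str,
                  system: str = "You are a batch log summarizer.") -> dict:
    """Build an OpenAI-style chat-completions payload (not sent here)."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }

payload = build_request("Summarize the error classes in this log batch: ...")
print(json.dumps(payload, indent=2))
# To actually send it:
#   requests.post(API_URL,
#                 headers={"Authorization": f"Bearer {api_key}"},
#                 json=payload)
```

Because the format is OpenAI-compatible, swapping a GPT-5.2 backend for the side-by-side comparison in the last table row means changing only the base URL, key, and model id.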
Three-Judge Assessment
Increment: FoodTruck Bench parity with GPT-5.2 + 17x cost advantage + domestic chip strategic pivot—three dimensions of new information.
Noise: The 10-week delay may also include training optimization factors, not just chip adaptation. FoodTruck Bench itself is not the industry’s most mainstream benchmark; combine with SWE-bench, MMLU, etc. for comprehensive assessment.
Signal: A 17x cost gap cannot be explained by short-term promotions alone. When model performance reaches parity while costs differ by an order of magnitude, the commercial landscape will shift rapidly.
Sources: Reuters - DeepSeek V4 China Chips | CCTV Related Report | DeepSeek API