DeepSeek V4: The 1.6T Parameter Open Model That Brought Frontier Model Prices Down

DeepSeek open-sourced V4 less than 48 hours after GPT-5.5’s release. Two versions—DeepSeek-V4-Pro (1.6T parameters) and DeepSeek-V4-Flash (284B parameters)—both use a Mixture-of-Experts (MoE) architecture. V4-Pro activates only 49B parameters per inference pass while maintaining a 1.6T total parameter count. The Apache 2.0 license permits direct commercial use.

API pricing: DeepSeek V4 Pro costs $2.20/M input tokens and $3.48/M output tokens—roughly 1/7 the price of Claude Opus 4.7 and 1/9 that of GPT-5.5. Notably, this price includes all V4 capabilities; there is no separately priced “thinking mode” or “high reasoning” variant.
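To make the pricing concrete, here is a minimal cost sketch using the per-million-token rates above. The traffic volumes are illustrative assumptions, and the "7x" Opus figure is only the rough multiple cited in this article, not a published Anthropic price.

```python
# DeepSeek V4 Pro rates from the article (USD per 1M tokens).
V4_INPUT_PER_M = 2.20
V4_OUTPUT_PER_M = 3.48

def monthly_cost(input_tokens_m: float, output_tokens_m: float) -> float:
    """Monthly spend in USD, given token volumes in millions."""
    return input_tokens_m * V4_INPUT_PER_M + output_tokens_m * V4_OUTPUT_PER_M

# Hypothetical small-team workload: 500M input / 100M output tokens per month.
v4 = monthly_cost(500, 100)
print(f"V4 Pro:       ${v4:,.2f}/month")
# The article puts V4 at roughly 1/7 of Claude Opus 4.7's price,
# so the same traffic there would land on the order of 7x this figure.
print(f"~Opus (7x):   ${v4 * 7:,.2f}/month")
```

At these rates the example workload runs to roughly $1,448/month on V4 Pro, which is the scale of savings the article's "1/7 the price" framing implies.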

Performance Positioning

| Dimension | DeepSeek V4 Pro | Claude Opus 4.7 | GPT-5.5 |
| --- | --- | --- | --- |
| Composite Score | ~8.27 | 8.72 | 8.80+ |
| Codeforces | New record | – | – |
| Vibe Code Benchmark | #1 | – | – |
| Multilingual Engineering | 67% | ~70% | – |
| Thinking Mode Tasks | ~8.90 | – | – |

V4 took #1 on the Vibe Code Benchmark, beating the closed-source Gemini 3.1 Pro and runner-up Kimi K2.6. In composite evaluation, V4 Pro scored approximately 8.27, placing it in the same tier as Claude Opus 4.7 (8.72)—a gap of less than 0.5 points at 1/7 the price.

Industry Impact

DeepSeek V4 may mark the beginning of “cost pressure transmission” in the model industry. Previously, open models competed on being cheap but slightly behind; V4 has crossed the threshold from “usable” to “good” on most benchmarks while costing a fraction of closed-source alternatives.

Action Items

  • API integration: V4’s pricing lets most small teams reduce LLM costs to negligible levels. Try V4-Flash for daily tasks, V4-Pro for complex reasoning.
  • Local deployment: Apache 2.0 enables private deployment. Flash version (284B) runs on a single A100.
  • Long context: 1M context window worth testing for document analysis and multi-file code understanding.
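The Flash-for-daily-tasks, Pro-for-complex-reasoning split above can be sketched as a simple routing helper. The model identifiers and the keyword heuristic here are illustrative assumptions, not an official DeepSeek API; a real integration would pass the chosen id to whatever chat-completion client the team uses.

```python
# Assumed model ids -- check DeepSeek's API docs for the real names.
DAILY_MODEL = "deepseek-v4-flash"
REASONING_MODEL = "deepseek-v4-pro"

# Crude signal that a prompt needs multi-step reasoning (illustrative only).
REASONING_HINTS = ("prove", "debug", "refactor", "multi-step", "plan")

def pick_model(prompt: str) -> str:
    """Route routine prompts to the cheap Flash tier and prompts that look
    like complex reasoning work to Pro, via a keyword heuristic."""
    lowered = prompt.lower()
    if any(hint in lowered for hint in REASONING_HINTS):
        return REASONING_MODEL
    return DAILY_MODEL

print(pick_model("Summarize this changelog"))              # -> deepseek-v4-flash
print(pick_model("Debug this multi-step race condition"))  # -> deepseek-v4-pro
```

A keyword router is the bluntest possible policy; teams could equally route by prompt length, caller, or a classifier, but the cost structure that makes the split worthwhile is the same.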

Sources