In late April 2026, DeepSeek released V4, an open-source model with 1.6 trillion parameters and support for context windows of up to 1 million tokens. It is among the largest open-source models available, matching top-tier closed-source systems in both parameter count and context length.
## Core Specifications

| Version | Features | Price |
|---|---|---|
| V4 | Full 1.6T parameter version | Open source |
| V4-Pro | Optimized for production | ~$1,071 (AI Index evaluation cost) |
| V4-Flash | Lightweight high-speed version | ~1/166 of GPT-5.5 pricing |
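The ~1/166 ratio in the table can be turned into a rough budget estimate. A minimal sketch below; the GPT-5.5 reference price is an assumed placeholder, not a published quote — only the ratio comes from the table.

```python
# Hypothetical cost comparison using the ~1/166 ratio from the table above.
# GPT_PRICE_PER_M_TOKENS is an assumed placeholder value, NOT a real quote.
GPT_PRICE_PER_M_TOKENS = 10.00   # assumed USD per 1M tokens
FLASH_RATIO = 1 / 166            # ratio from the pricing table

def cost_usd(tokens: int, price_per_m: float) -> float:
    """Cost of processing `tokens` at `price_per_m` USD per 1M tokens."""
    return tokens / 1_000_000 * price_per_m

tokens = 50_000_000  # e.g., a month of batch processing
print(f"GPT-5.5:  ${cost_usd(tokens, GPT_PRICE_PER_M_TOKENS):.2f}")
print(f"V4-Flash: ${cost_usd(tokens, GPT_PRICE_PER_M_TOKENS * FLASH_RATIO):.2f}")
```

Whatever the absolute prices turn out to be, the two orders of magnitude between the lines is what makes high-volume batch workloads viable on V4-Flash.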
## Impact on the Open-Source Ecosystem

- **Lower self-deployment barriers:** Enterprises can run near-top-tier models on their own infrastructure
- **Reduced fine-tuning costs:** Open weights enable vertical domain adaptation without training from scratch
- **Context window race:** 1M-token capability enables long-document processing and codebase understanding
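A 1M-token context is not free at inference time: the KV cache alone dominates memory. A back-of-the-envelope sketch below; the layer count, KV-head count, and head dimension are illustrative assumptions, not published V4 specs.

```python
# Back-of-the-envelope KV-cache memory for a 1M-token context.
# The architecture numbers passed in below (layers, KV heads, head dim)
# are illustrative assumptions, NOT published DeepSeek V4 specs.
def kv_cache_bytes(seq_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Bytes of KV cache for one sequence (2x for key + value, fp16 = 2 bytes)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

gib = kv_cache_bytes(seq_len=1_000_000, n_layers=64,
                     n_kv_heads=8, head_dim=128) / 2**30
print(f"~{gib:.0f} GiB of KV cache per 1M-token sequence")  # ~244 GiB
```

Even with aggressive KV-head sharing, serving full-length contexts implies multi-GPU memory footprints — part of why self-deployment economics matter here.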
## Comparison with Same-Week Competitors

| Scenario | Best Choice | Key Metric |
|---|---|---|
| Code generation | Claude Opus 4.7 | SWE-Bench 87.6% |
| Complex reasoning | GPT-5.5 | Terminal-Bench 82.7% |
| Cost-effectiveness | DeepSeek V4-Flash | 1/166 of GPT-5.5 price |
| Scale | DeepSeek V4 | 1.6T parameters |
## Quick Start

```shell
pip install transformers
# Or use the DeepSeek official SDK for API calls
```
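For the API route, earlier DeepSeek releases exposed an OpenAI-compatible chat endpoint; a minimal sketch of the request body under that assumption is below. The endpoint URL and the model name `deepseek-v4` are assumptions, not confirmed identifiers.

```python
import json

# Hypothetical sketch of an OpenAI-compatible chat-completion request.
# BASE_URL and the "deepseek-v4" model name are ASSUMED, not confirmed.
BASE_URL = "https://api.deepseek.com/v1/chat/completions"  # assumed endpoint

def build_request(prompt: str, model: str = "deepseek-v4") -> dict:
    """Return the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }

body = build_request("Summarize this repository's architecture.")
print(json.dumps(body, indent=2))  # POST this to BASE_URL with your API key
```

Swapping only the base URL and model string between providers is the main convenience of the OpenAI-compatible format.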