DeepSeek quietly rolls out 'V4 Lite' update — expanded context handling, no official announcement
On March 9, Chinese tech media reported that DeepSeek's production models had received an update with expanded context handling, which community members have dubbed "V4 Lite." DeepSeek has not confirmed the name, any specifications, or a connection to the anticipated full V4 release. The update follows a silent February 11 expansion of the context window to 1M tokens. The full DeepSeek V4 has now missed multiple expected release windows: mid-February, Lunar New Year, late February, and early March. Unverified internal reports cite V4 scoring 90% on HumanEval and above 80% on SWE-bench Verified.
DeepSeek appears to be staging infrastructure changes in production ahead of a V4 announcement, the same pattern it followed before V3. The silence may be tactical: DeepSeek V3's unannounced release in late 2024 generated maximum market impact. If V4 arrives with the reported Engram conditional-memory architecture and a ~37B-active-parameter MoE at 1M context, it would compete directly with GPT-5.4 at a fraction of the inference cost, a repeat of the V3 market disruption that briefly rattled Nvidia's stock price.