DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence
huggingface.co | 141 points by cmrdporcupine | 3 hours ago
From this thread [0], am I right to assume that because it's A49B (only ~49B active params despite 1.6T total), it can run locally on consumer hardware, theoretically, if perhaps very slowly? Or is that wrong?
Theoretically, with streaming, any model that fits on disk can run on consumer hardware, just terribly slowly.
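For a rough sense of why A49B matters here, a back-of-the-envelope sketch (the 1.6T / 49B-active figures are the ones from the comment above; the quantization bit-widths are just illustrative assumptions):

```python
def gib(num_params: float, bits_per_param: int) -> float:
    """Approximate weight size in GiB at a given quantization."""
    return num_params * bits_per_param / 8 / 2**30

TOTAL_PARAMS = 1.6e12   # 1.6T total parameters (per the comment above)
ACTIVE_PARAMS = 49e9    # ~49B activated per token (the "A49B" part)

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{gib(TOTAL_PARAMS, bits):,.0f} GiB of weights on disk, "
          f"only ~{gib(ACTIVE_PARAMS, bits):,.0f} GiB of experts touched per token")
```

So at 4-bit you'd still need roughly 745 GiB of weights sitting somewhere, but each token only reads on the order of 23 GiB of them, which is why streaming from a fast SSD "works" in the technically-possible-but-painfully-slow sense.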
Hmm. Looks like DeepSeek is just about 2 months behind the leaders now.
If that is really so, it would now be good enough to replace Claude for us; we use Sonnet only, and with our setup, use cases, and tooling it works as well as Opus 4.6 and 4.7 so far. We won't replace Sonnet as long as they offer subscriptions, but it is good to have alternatives for when they eventually force pay-per-use.
The quality of this model relative to the price is an insane deal.
Models like DeepSeek are the only reason we are able to categorize and measure the quality of thousands of MCP servers (https://glama.ai/blog/2026-04-03-tool-definition-quality-sco...). That's billions of tokens – an expense that would otherwise be very hard to swallow.
Pricing: https://api-docs.deepseek.com/quick_start/pricing
"Pro" $3.48 / 1M output tokens vs $4.40 for GLM 5.1 or $4.00 for Kimi K2.6
"Flash" is only $0.28 / 1M and seems quite competent
(EDIT: Note that the models opencode etc. hit on the DeepSeek API (deepseek-chat / deepseek-reasoner) appear to be "flash".)
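For reference, a minimal sketch of hitting that API yourself with the two model names mentioned above; this assumes DeepSeek's documented OpenAI-compatible endpoint and uses a placeholder key:

```python
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible API; deepseek-chat / deepseek-reasoner
# are the model names referenced above. Which pricing tier they map to
# ("flash" vs "pro") is exactly the open question in this thread.
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-chat",                 # or "deepseek-reasoner"
    messages=[{"role": "user", "content": "Reply with one word."}],
)
print(resp.choices[0].message.content)
```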
I estimated that even with heavy usage, around 40M tokens, it would cost you around $30-70 depending on caching. That would give you around double the usage compared to GPT-5.5 on the $200 sub.
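For what it's worth, here's the shape of that estimate as a sketch. Only the $3.48/1M output price comes from the pricing link above; the input price, input/output split, and cached-input discount are assumptions you'd want to check against the pricing page:

```python
# Rough monthly cost sketch for ~40M tokens of usage.
TOKENS = 40e6                 # ~40M tokens/month (the figure from the comment above)
OUTPUT_SHARE = 0.25           # assume 1/4 of tokens are output
OUTPUT_PRICE = 3.48           # $/1M output tokens ("Pro" tier, quoted above)
INPUT_PRICE = 0.60            # $/1M input tokens -- assumed, check the pricing page
CACHE_DISCOUNT = 0.1          # assume cached input costs ~10% of the full input price

def monthly_cost(cache_hit_rate: float) -> float:
    out_tok = TOKENS * OUTPUT_SHARE
    in_tok = TOKENS - out_tok
    cached = in_tok * cache_hit_rate
    uncached = in_tok - cached
    return (out_tok / 1e6 * OUTPUT_PRICE
            + uncached / 1e6 * INPUT_PRICE
            + cached / 1e6 * INPUT_PRICE * CACHE_DISCOUNT)

for hit in (0.0, 0.5, 0.9):
    print(f"cache hit {hit:.0%}: ~${monthly_cost(hit):,.0f}/month")
```

Under those assumptions it lands roughly in the $38-53/month range, which is at least consistent with the $30-70 ballpark above.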
So the R line (R2) is discontinued or folded back into V4, right?
I believe the R stood for reasoning, just like OpenAI had their own dedicated o1/o3 family, but now every model just has it built-in.