Hacker News

DeepSeek’s docs say V4 has a 1M context length. Is that actually usable in practice, or just the model/API limit?

Codex shows ~258k for me and Claude Code often shows ~200k, so I’m curious how DeepSeek is exposing such a large window.



They've added a lot of optimization focused on the KV cache, so they can support a much larger window without it eating all the VRAM.

The 1M window might be usable, but it will probably underperform compared to a smaller window, of course.
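To see why KV-cache optimization matters at 1M tokens, here's a rough back-of-the-envelope sketch. The cache for standard multi-head attention grows as seq_len × layers × 2 (K and V) × heads × head_dim, while a latent-compression scheme like DeepSeek's MLA stores one small compressed vector per token per layer instead. The specific dimensions below (61 layers, 128 heads of dim 128, a 576-dim latent) are illustrative assumptions borrowed from published DeepSeek-V3 numbers, not V4's actual config:

```python
def kv_cache_bytes(seq_len, n_layers, kv_heads, head_dim, dtype_bytes=2):
    # Standard MHA cache: K and V vectors for every head, token, and layer
    return seq_len * n_layers * 2 * kv_heads * head_dim * dtype_bytes

def mla_cache_bytes(seq_len, n_layers, latent_dim, dtype_bytes=2):
    # MLA-style cache: one compressed latent vector per token per layer
    return seq_len * n_layers * latent_dim * dtype_bytes

# Illustrative config (V3-like, assumed): 61 layers, 128 heads x 128 dim,
# 576-dim latent, fp16 (2 bytes), 1M-token sequence
mha = kv_cache_bytes(1_000_000, 61, 128, 128)
mla = mla_cache_bytes(1_000_000, 61, 576)
print(f"MHA-style: {mha / 2**30:,.0f} GiB")  # ~3,723 GiB
print(f"MLA-style: {mla / 2**30:,.1f} GiB")  # ~65.4 GiB
```

The roughly 50x reduction is what makes a 1M window plausible to serve at all; with an uncompressed cache, a single long sequence wouldn't fit on any current GPU node.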



