Hacker News

They do, but that's kind of the article's point - someone still has to write and maintain the per-model chat template and tool call parsing inside vllm/sglang. Every time a new model ships with a slightly different format, the inference server needs an update. The M×N problem doesn't disappear, it just gets pushed one layer down.
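To make the maintenance burden concrete, here's a minimal sketch of two hypothetical prompt renderers, loosely modeled on ChatML-style and Llama-style formats (simplified stand-ins, not the exact templates vllm/sglang ship). The same conversation has to be serialized differently per model family, which is exactly the code someone has to keep updating:

```python
# Two simplified, illustrative chat-template renderers.
# Real per-model templates (ChatML, Llama [INST], etc.) differ in
# more ways than this, but the shape of the problem is the same.

def render_chatml(messages):
    # ChatML-style: each turn wrapped in <|im_start|>role ... <|im_end|>
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    )

def render_llama(messages):
    # Llama-style (simplified): user turns wrapped in [INST] ... [/INST]
    parts = []
    for m in messages:
        if m["role"] == "user":
            parts.append(f"[INST] {m['content']} [/INST]")
        else:
            parts.append(m["content"])
    return " ".join(parts)

messages = [{"role": "user", "content": "What's the weather?"}]
print(render_chatml(messages))
print(render_llama(messages))
```

And the same split exists on the output side: one model might emit tool calls inside `<tool_call>` tags while another emits bare JSON, so the server also needs a matching per-model parser. That's the layer the M×N problem lands on.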