Hacker News

Please publish sha256sums of the merged GGUFs in the model descriptions. Otherwise it's hard to tell if the version we have is the latest.


Yep, we can do that, probably by adding a table. In general we post in the discussions of the model pages, e.g. https://huggingface.co/unsloth/MiniMax-M2.7-GGUF/discussions...

HF also provides SHA-256 checksums, e.g. https://huggingface.co/unsloth/MiniMax-M2.7-GGUF/blob/main/U... is 92986e39a0c0b5f12c2c9b6a811dad59e3317caaf1b7ad5c7f0d7d12abc4a6e8

But agreed, it's probably better to place them in a table.
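Checking a local file against a published digest like the one above is a one-liner once you stream the file through a hasher; a minimal sketch (the filename is a placeholder, and the truncated digest in the comment stands in for the full published value):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a (possibly multi-GB) GGUF through SHA-256 without loading it into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical usage: compare against the digest published in the model card.
# assert sha256_of_file("model.gguf") == "92986e39...abc4a6e8"
```

This is equivalent to `sha256sum model.gguf` but easy to fold into a download script.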


Thanks! I know about HF's chunk checksums, but HF doesn't publish (or possibly even know) the merged checksums.
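For multi-file models, one workable check in the meantime is to verify each downloaded chunk against the per-file SHA-256 that HF already publishes, before merging. A sketch (filenames and digests in the usage comment are placeholders):

```python
import hashlib

def verify_chunks(expected: dict[str, str], chunk_size: int = 1 << 20) -> list[str]:
    """Return the paths whose SHA-256 does not match its published digest."""
    bad = []
    for path, digest in expected.items():
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk_size), b""):
                h.update(block)
        if h.hexdigest() != digest.lower():
            bad.append(path)
    return bad

# Hypothetical usage; the real digests come from each file's page on HF.
# mismatches = verify_chunks({
#     "model-00001-of-00002.gguf": "aaaa...",
#     "model-00002-of-00002.gguf": "bbbb...",
# })
```

This only proves the chunks arrived intact; it doesn't give you the digest of the merged file, since the merge step may rewrite headers rather than simply concatenate.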


Oh, for multi-file models? Hmm, OK, let me check that out.



Why do you merge the GGUFs? The 50 GB files are more manageable (IMO) and you can verify checksums as you say.


I admit it's a habit that's probably weeks out of date. Earlier engines barfed on split GGUFs, but support is a lot better now. Frontends didn't always infer the model name correctly from the first chunk's filename, but once llama.cpp added the models.ini feature, that objection went away.

The purist in me feels the 50GB chunks are a temporary artifact of Hugging Face's uploading requirements, and the authoritative model file should be the merged one. I am unable to articulate any practical reason why this matters.
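For reference, the merge step itself can be scripted. This sketch assumes llama.cpp's gguf-split tool is installed as `llama-gguf-split` and that merge mode takes the first split plus an output path; check `--help` for your build, as the binary name and flags have varied across versions:

```python
import subprocess

def build_merge_cmd(first_split: str, output: str) -> list[str]:
    """Command line for llama.cpp's gguf-split tool in merge mode.

    Only the first split is named; the tool is expected to discover the
    remaining -0000N-of-0000M files on its own. Binary name is an assumption."""
    return ["llama-gguf-split", "--merge", first_split, output]

# Hypothetical usage:
# subprocess.run(build_merge_cmd("m-00001-of-00003.gguf", "m.gguf"), check=True)
```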



