
If you just want to run a local LLM, you could download ollama and be up and running in minutes. You'll be limited to small models (I would start with qwen3:1.7b), but it should be quite fast.
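For example, a minimal sketch assuming the ollama CLI is already installed and on your PATH:

  # pull the model weights, then start an interactive chat in the terminal
  ollama pull qwen3:1.7b
  ollama run qwen3:1.7b

(`ollama run` will pull the model automatically on first use, so the explicit pull is optional.)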



