
> Well, the same goes. It turns out, renting a server large enough to run big (useful, > 8B) models is actually quite expensive. The per-API-call costs of real models (like GPT-4) add up very quickly once you're doing non-trivial work.

I run my own models, but the truth is most of the time I just use an API provider.

TogetherAI and Groq both have free offers generous enough that I haven't used them up in 6 months of experimentation, and TogetherAI in particular has more models and gets new models up quicker than I could try them myself.
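
For anyone wondering what "just use an API provider" looks like in practice: both TogetherAI and Groq expose OpenAI-compatible endpoints, so something like the sketch below is typically all you need. The base URL, API key, and model id here are placeholders/assumptions on my part; check the provider's docs for the current values.

    # Minimal sketch: calling an OpenAI-compatible hosted-model endpoint.
    # Base URL and model id are assumptions; verify against the provider's docs.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.together.xyz/v1",   # TogetherAI's OpenAI-compatible endpoint (assumed)
        api_key="YOUR_TOGETHER_API_KEY",          # placeholder; set your own key
    )

    resp = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",  # example model id; availability may vary
        messages=[{"role": "user", "content": "Summarize this paragraph in one sentence: ..."}],
    )
    print(resp.choices[0].message.content)

Swapping providers is usually just a matter of changing the base_url (e.g. Groq's is an OpenAI-compatible path under api.groq.com) and the model name, which is what makes it so easy to default to an API instead of self-hosting.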


