
There are always ways to trade precision for speed in statistical models.
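A minimal sketch of one such trade-off: post-training quantization, where weights are stored as 8-bit integers instead of 32-bit floats, cutting memory and often speeding up inference at the cost of some precision. (Illustrative only; the function names here are hypothetical and don't reflect any vendor's actual implementation.)

```python
def quantize_int8(weights):
    """Map floats to int8 range with a single scale factor (symmetric quantization)."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate floats from the int8 representation."""
    return [q * scale for q in q_weights]

weights = [0.013, -0.742, 0.255, 1.004, -0.331]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Rounding error per weight is bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(max_err <= scale / 2)
```

Whether a provider actually swaps in a lower-precision model under load is a separate question, which is the point of contention below.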


Sure, generally speaking. But is that true for a static, fixed-parameter-count LLM like GPT-4?

I think you're hand-waving a lot just to claim that OpenAI is (somehow) reducing the accuracy of its models during high load. And I'm not sure why.



