
> cheapest AI model when asking questions and then switch to the more expensive if it doesn't work.

The thing is, more expensive isn't guaranteed to be better. The more expensive models are better most of the time, but not all the time. I talk about this more in this comment https://news.ycombinator.com/item?id=42313401#42313990

Since LLMs are non-deterministic, there is no guarantee that GPT-4o will beat GPT-4o mini on any given question. GPT-4o is most likely to give the better answer, but sometimes the simpler GPT-4o mini does.
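
To make the quoted idea concrete, here is a rough sketch of the "cheap first, escalate if it doesn't work" cascade. It assumes the OpenAI Python SDK (and an OPENAI_API_KEY in the environment); the model list and the looks_good acceptance check are illustrative placeholders, not part of any real API:

    from openai import OpenAI

    client = OpenAI()
    MODELS = ["gpt-4o-mini", "gpt-4o"]  # cheapest first, escalate on failure

    def looks_good(answer: str) -> bool:
        # Placeholder check: real ones might parse the output, run tests,
        # or just be the user clicking "retry".
        return bool(answer.strip())

    def ask(question: str) -> str:
        answer = ""
        for model in MODELS:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": question}],
            )
            answer = resp.choices[0].message.content or ""
            if looks_good(answer):
                return answer
        # If nothing passed the check, return the last (most capable) model's answer.
        return answer

Whether this saves money depends entirely on how often the cheap model's answer passes the check, which is the point being debated in this thread.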



As you say, the more expensive models are better most of the time.

Since we can't easily predict, at the time of asking, which model will actually be better for a given question, it makes sense to stick to the most expensive/powerful models. We could try to predict it, but that would be a complex and expensive endeavor. Meanwhile, both weak and powerful models are already too cheap to meter in direct, regular use, and by the very definition of "most of the time" you come out ahead with the more powerful ones, so it doesn't make sense to default to a weaker model.


For regular users I agree; for businesses, I think it will have to be a shotgun approach.

Edit:

I should add that for businesses it isn't about which model is better, but about risk: even the better model can still be wrong.
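
A sketch of what that shotgun approach could look like: send the same question to several models in parallel and treat disagreement as a risk signal. Again this assumes the OpenAI Python SDK, and the agreement check here is deliberately naive:

    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI

    client = OpenAI()
    MODELS = ["gpt-4o-mini", "gpt-4o"]

    def ask(model: str, question: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content or ""

    def shotgun(question: str) -> dict:
        with ThreadPoolExecutor() as pool:
            answers = list(pool.map(lambda m: ask(m, question), MODELS))
        # Naive agreement check; disagreement doesn't say which answer is
        # wrong, only that a human or a stricter check should look before
        # the business acts on it.
        agree = len({a.strip().lower() for a in answers}) == 1
        return {"answers": dict(zip(MODELS, answers)), "agree": agree}

The cost is paying for every model on every question, which is the trade-off against the "default to the strongest model" position above.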



