
I'd expect that any kind of logistics would need tight bounds on min-max 'performance', and not be up to the whims of artificial stupidity. Am I wrong?


"Training a model" is just a different name for "optimization".

The typical logistics optimization system is "smart" because it optimizes exactly the objective it's supposed to. LLMs here would not be "smart", since they're optimized toward a different target (producing human-like, language-like text), and using them for things they weren't specifically trained to do is indeed stupid.
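To make the contrast concrete, here's a minimal sketch (with made-up numbers) of the two objectives side by side: a classical logistics solver minimizes the cost you actually care about, under hard constraints, while an LLM is trained to minimize next-token cross-entropy, which says nothing about delivery cost at all.

    import numpy as np
    from scipy.optimize import linprog

    # --- Logistics: explicit objective with hard constraints ---
    # Ship from 2 warehouses to 3 stores; minimize total shipping cost.
    cost = np.array([4, 6, 9, 5, 3, 7])    # hypothetical cost per unit on each route
    A_eq = np.array([
        [1, 1, 1, 0, 0, 0],                # warehouse 0 ships its full supply
        [0, 0, 0, 1, 1, 1],                # warehouse 1 ships its full supply
        [1, 0, 0, 1, 0, 0],                # store 0 demand met
        [0, 1, 0, 0, 1, 0],                # store 1 demand met
        [0, 0, 1, 0, 0, 1],                # store 2 demand met
    ])
    b_eq = np.array([50, 40, 30, 30, 30])  # supplies and demands (balanced: 90 = 90)
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    print("min shipping cost:", res.fun)   # provably optimal for *this* objective

    # --- LLM training: a different objective entirely ---
    # Cross-entropy of the next token; delivery cost never appears in the loss.
    logits = np.array([2.0, 0.5, -1.0])    # model scores for 3 candidate tokens
    target = 0                             # index of the token that actually came next
    log_probs = logits - np.log(np.exp(logits).sum())
    print("next-token loss:", -log_probs[target])

The point isn't that one optimizer is better; it's that each is only as good as its objective, and "predict the next token" is not "minimize my shipping cost".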



