
"All models are wrong, but some are useful".

If we were only engaging in model-based inference, then we, too, would always be hallucinating. But the very thing you're pointing out -- that we act differently when our internal model is wrong vs. when it is right -- is the crux of the difference. We use models, but then we have the ability to immediately test the output of those models for correctness, because we have semantic, not just syntactic, awareness of both the input and output data. We have criteria for determining the accuracy of what our model is producing.

LLMs don't. They are only capable of stochastic inference from a pre-defined model that represents purely syntactic patterns, and they have no ability to determine whether anything they output is semantically correct.
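
To make "stochastic inference" concrete, here's a minimal sketch (the vocabulary and scores are made-up toy values, not any real model's API): the sampler just draws the next token from a probability distribution over tokens, and nothing in that procedure checks whether the result is semantically correct.

    # Toy illustration of stochastic next-token sampling (hypothetical values).
    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["the", "cat", "sat", "mat"]           # hypothetical toy vocabulary
    logits = np.array([2.0, 0.5, 1.0, -1.0])       # scores a model might emit

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
    next_token = rng.choice(vocab, p=probs)        # stochastic inference: sample a token

    print(next_token)
    # Nothing above evaluates whether the sampled token is semantically correct;
    # the procedure only follows the learned distribution over token patterns.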


