
I don't believe there's any way for an LLM operating alone to recognize when it doesn't know something, because it has no concept of knowing something.

Its one job is to predict the next word in a body of text. That's it. It's good enough at it that for at least half the people here it passes the Turing test with flying colors, but the only kind of confidence level it has is confidence in its prediction of the next word, not confidence in the information conveyed.
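To make that concrete, here's a rough sketch of what that "confidence" actually is. This uses GPT-2 via the Hugging Face transformers library purely as an illustration (nothing specific to ChatGPT): the only probability the model exposes is over the next token, which says nothing about whether the resulting claim is true.

  # Minimal sketch: a causal LM's only "confidence" is a probability
  # distribution over the next token, not over facts.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  prompt = "The capital of Australia is"
  inputs = tokenizer(prompt, return_tensors="pt")
  with torch.no_grad():
      logits = model(**inputs).logits

  # Distribution over the *next token* only.
  next_token_probs = torch.softmax(logits[0, -1], dim=-1)
  top = torch.topk(next_token_probs, k=5)
  for prob, token_id in zip(top.values, top.indices):
      # A high probability here means "likely continuation",
      # not "true statement".
      print(f"{tokenizer.decode([token_id.item()]):>10}  p={prob.item():.3f}")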

If we were to take a language model and hook it up to another system that does have a concept of "knowing" something, I could see us getting somewhere useful—essentially a friendly-to-use search engine over an otherwise normal database.
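One hypothetical shape that could take (the schema and names below are invented for illustration): look the question up in an ordinary database first, refuse when nothing matches, and only hand retrieved facts to the model to phrase.

  # Hypothetical sketch of "LLM as friendly front end to a normal database":
  # the database decides whether we *know* something; the model only phrases it.
  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE facts (topic TEXT, fact TEXT)")
  db.execute("INSERT INTO facts VALUES "
             "('capital of australia', 'Canberra is the capital of Australia.')")

  def answer(question: str) -> str:
      rows = db.execute(
          "SELECT fact FROM facts WHERE instr(?, topic) > 0",
          (question.lower(),),
      ).fetchall()
      if not rows:
          # The "knowing" system has nothing, so we can honestly say so
          # instead of letting the model guess.
          return "I don't know."
      facts = " ".join(fact for (fact,) in rows)
      # In a real system this would be handed to the language model,
      # which only rewrites facts it was given.
      return f"Based on what I have on record: {facts}"

  print(answer("What is the capital of Australia?"))  # uses the stored fact
  print(answer("Why did X do Y?"))                    # -> "I don't know."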



> Its one job is to predict the next word in a body of text.

“Predicting the next word” and “writing” are the same thing; you’re just saying it writes its answers in text. Nothing about that prevents it from reasoning, and its training objective was more than just “predict the next word” anyway.


I don't know if I buy this. It feels like your confidence in what you say is closely tied to "knowing". I'm sure there is more research to do here, but I'm not sure there is a need to tie it to some other system. As it stands today, there are definitely things ChatGPT doesn't know and will tell you so. For example, I asked it why Donald Trump spanked his kids, and it said, "I do not have information about the parenting practices of Donald Trump".

That said, there are a lot of things it does get wrong, and it would be nice for it to be better at those. But I do think that, maybe much like humans, there will always be statements it makes that are not true.



