
Yes, I have experienced this multiple times - for example, ChatGPT and Bard giving completely opposite answers to the same question. So I have learned to take their answers with a pinch of salt and to do further research (either Google it, or ask another Conv-AI tool) when I start to smell something wrong.
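A rough sketch of that kind of cross-check, in case it's useful to anyone. The ask_model_a / ask_model_b callables are hypothetical stand-ins for whatever chat APIs you actually have access to; the only real logic here is "ask twice, flag low agreement":

    from difflib import SequenceMatcher

    def cross_check(question, ask_model_a, ask_model_b, threshold=0.5):
        """Ask two chat models the same question and flag low agreement.

        ask_model_a / ask_model_b are placeholder callables that take a
        prompt string and return the model's answer as a string.
        """
        answer_a = ask_model_a(question)
        answer_b = ask_model_b(question)
        # Crude lexical similarity; a real check would compare claims, not strings.
        similarity = SequenceMatcher(None, answer_a.lower(), answer_b.lower()).ratio()
        return {
            "answer_a": answer_a,
            "answer_b": answer_b,
            "similarity": similarity,
            "needs_review": similarity < threshold,
        }

It's obviously crude - two models can agree on the same wrong answer - but it at least automates the "smell something wrong" step.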

IMO, these Conv-AI tools should indicate to the user when they are hallucinating.



> IMO, these Conv-AI tools should indicate to the user when they are hallucinating.

If they could do that, then it would be fairly easy to "not hallucinate".

To put it another way, the "don't hallucinate" problem and the "warn me if you're hallucinating" problem are in the same difficulty class.
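The closest practical approximation I've seen is a consistency heuristic: sample the same prompt several times and treat disagreement between samples as a warning sign. A minimal sketch, assuming a hypothetical sample_answer(prompt) wrapper around whatever model you're calling with a nonzero temperature:

    from collections import Counter

    def consistency_warning(prompt, sample_answer, n=5):
        """Sample n answers and warn if no single answer dominates.

        sample_answer is a placeholder callable that queries the model
        once and returns a short answer string. Note this only catches
        unstable answers; a confidently repeated hallucination still passes,
        which is exactly the difficulty-class point above.
        """
        answers = [sample_answer(prompt).strip().lower() for _ in range(n)]
        most_common, count = Counter(answers).most_common(1)[0]
        agreement = count / n
        return {"answer": most_common, "agreement": agreement, "warn": agreement < 0.6}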


Like many human habitual bullshitters, they do not even know when they're doing it.



