
The difference is that everyone knows about hallucinations, so an LLM can never be trusted by default; yet they still trusted it blindly.


Is that really worse than blindly trusting the code of a human, when "everyone knows about" the bugs that humans write as well?




