Hacker News

The uncensored models confirm the biases present in the input data. That may or may not correspond to more "correct" output.


Can you offer any example where the censored answer would be more correct than the uncensored one when you're asking for a falsifiable, factual response, and not just an opinion? I couldn't care less what the chatbots say on matters of opinion or speculation, but I get quite annoyed when the censorship gets in the way of factual queries, which it often does. And it's made worse by the fact that I can't envision a [benevolent] scenario where that censorship is actually beneficial.




