
The important part of this is the "I feel like" bit. There's a fair and growing body of research suggesting that the "fact" is more durable in your memory than its context, and over time, across a lot of information, you will lose some of the mappings and integrate things you "know" to be false into your model of the world.

This more closely fits our models of cognition anyway. There is nothing really resembling a filter in the human mind, though there are things that feel like one.



Maybe, but then that's the same whether I talk to ChatGPT or a human, isn't it? Except with ChatGPT I can instantly verify what I'm looking for, whereas with a human I can't do that.


I wouldn't assume that it's the same, no. For all we knock them, unconscious biases seem to get a lot of work done: we do all know real things that we learned from other unreliable humans, somehow. It's not a perfect process at all, but it's one we're experienced at and have lifetimes of intuition for.

LLMs seem like people but aren't, and in some ways they carry a lot of the signals of a reliable source, so I'm not sure how these processes will map onto them. In fact, I'm skeptical of anyone who is confident about it either way.



