I very much agree. I've been telling folks in the trainings I run that the term "artificial intelligence" is a cognitohazard, in that it pre-consciously steers you to conceptualize an LLM as an entity.
LLMs are cool and useful technology, but if you approach them with the attitude you're talking with an other, you are leaving yourself vulnerable to all sorts of cognitive distortions.
It certainly isn't helped by RLHF and the chat interface encouraging this. LLM providers have every incentive to make their users engage with the model as an other. It was much harder to do that accidentally when it was just a completion UI and not something designed to roleplay as a person.
I don't think that is actually a problem. For decades people have believed that computers can't be wrong. Why, now, suddenly, would it be worse if they believed the computer wasn't a computer?
The larger problem is cognitive offloading. The people for whom this is a problem were already not doing the cognitive work of verifying facts and forming their own opinions. Maybe they watched the news, read a Wikipedia article, or listened to a TED talk, but the result is the same: an opinion they felt confident in without a verified basis.
To the extent this is "on steroids," it is because they see it as an expert-in-everything computer and because it is so much faster than watching a TED talk or reading a long-form article.
It can also dispense agreeable confirmation on tap, with very little friction and hardly any chance of accidentally encountering something unexpected or challenging. Even TED talks occasionally have a point of view that isn't perfectly crafted for each listener.