All these threads are full of "yeah but humans are bad too" arguments, as if interacting with LLMs and humans were in any way equivalent in accountability, motivations, or capabilities.
There are a lot of things LLMs can do, and many they can't. Therapy is one of the things they could do but shouldn't... not yet, and probably not for a long time or ever.
I'm not referring to the study, but to the comments that are trying to make the case.
The study is about the present, using certain therapy bots and custom instructions to generic LLMs. It doesn't do much to answer "Can they work well?"
> All these threads are full of "yeah but humans are bad too" arguments, as if interacting with LLMs and humans were in any way equivalent in accountability, motivations, or capabilities.
They are correctly pointing out that many licensed therapists are bad, and many patients feel their therapy was harmful.
We know human therapists can be good.
We know human therapists can be bad.
We know LLM therapists can be bad ("OK, so just like humans?")
The remaining question is "Can they be good?" It's too early to tell.
I think it's totally fine to be skeptical. I'm not convinced that LLMs can be effective. But having strong convictions that they cannot is leaping into the territory of faith, not science/reason.
> The remaining question is "Can they be good?" It's too early to tell.
You're falling into a rhetorical trap here by assuming that they can be made better. An equally valid question is "Will they get even worse?"
Believing that they can be good is equally a leap of faith. All current evidence points to them being incredibly harmful.
+1. I also wanted to point out: if there are questions about whether that point is valid... just look at the post.
And from my perspective this should be common sense, not require a scientific paper. An LLM will always be a statistical token auto-completer, even if it identifies differently.
It is pure insanity to put a human with an already harmed psyche in front of this device and trust in the best.
It's also insanity to pretend this is a matter of "trust". Any intervention is going to have some amount of harm and some amount of benefit, measured along many dimensions. A therapy dog is good at helping many people in many ways, but I wouldn't just bring them into the room and "trust in the best".
... aren't we commenting on just such a study?