> If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.
It's not a matter of liking or disliking something. It's a question of whether that thing is going to heal or destroy your psyche over time.
You're talking about personal responsibility while we're talking about public policy. If people are using LLMs as a substitute for their closest friends and therapists, will that help or hurt them? We need to know whether we should be strongly discouraging it before it becomes another public health disaster.
> We need to know whether we should be strongly discouraging it before it becomes another public health disaster.
That's fair! However, I think PSAs on the dangers of AI usage are very different in reach and scope from legally making LLM providers responsible for the AI usage of their users, which is what I understood jsrozner to be saying.