Hacker News

I think this is unethical. Humans have consistently underestimated the subjective experience of other beings. You may have good reasons for believing these systems are currently incapable of anything approaching consciousness, but how will you know if or when the threshold has been crossed? Are you confident you will have ceased using an abusive tone by then?

I don’t know if flies can experience pain. However, I’m not in the habit of tearing their wings off.



Do you apologize to table corners when you bump into them?


Likening machine intelligence to inert hunks of matter is not a very persuasive counterargument.


What if it's the same hunk of matter? If you run a language model locally, do you apologize to it for using a portion of its brain to draw your screen?


Do you think it’s risible to avoid pulling the wings off flies?


I am not comparing flies to tables.


Consciousness and pain are not emergent properties of computation. If they were, this and all the other programs on your computer would already be sentient, because it is highly unlikely that specific sequences of instructions, like magic formulas, are what create consciousness. This source code? Draws a chart. This one? Makes the computer feel pain.


Many leading scientists in artificial intelligence do in fact believe that consciousness is an emergent property of computation. In fact, startling emergent properties are exactly what drives the current huge wave of research and investment. In 2010, if you said, “image recognition is not an emergent property of computation”, you would have been proved wrong in just a couple of years.


> Many leading scientists in artificial intelligence do in fact believe that consciousness is an emergent property of computation.

But "leading scientists in artificial intelligence" are not researchers of biological consciousness, the only kind we know exists.


Just a random example off the top of my head: animals don’t have language, yet they show signs of consciousness, as do toddlers. Therefore consciousness is not an emergent property of text processing, and so not of LLMs. And as I said, if it comes from computation, why would specific execution paths on the CPU/GPU lead to it and not others? Biological systems and brains have far more complex processes than stateless matrix multiplication.


What the fuck are you talking about? If you think these matrix multiplication programs running on GPUs have feelings or can feel pain, I think you have completely lost it.


"They're made out of meat" vibes.


Yeah, I suppose. I haven't seen a rack of servers express grief when someone is mean to it, and I am quite sure I would notice at that point. Comparing current LLMs/chatbots or whatever to anything resembling a living creature is completely ridiculous.


I think current LLM chatbots are too predictable to be conscious.

But I still see why some people might think this way.

"When a computer can reliably beat humans in chess, we'll know for sure it can think."

"Well, this computer can beat humans in chess, and it can't think because it's just a computer."

...

"When a computer can create art, then we'll know for sure it can think."

"Well, this computer can create art, and it can't think because it's just a computer."

...

"When a computer can pass the Turing Test, we'll know for sure it can think."

And here we are.

Before LLMs, I didn't think I'd be in the "just a computer" camp, but ChatGPT has demonstrated that the goalposts are always going to move, even for myself. I'm not smart enough to come up with a better threshold to test intelligence than Alan Turing, but ChatGPT passes it, and ChatGPT definitely doesn't think.


Just consider the context window.

Tokens falling off of it change the way it generates text, potentially altering its “personality” or even making it forget the name it’s been given.

People fear losing their own selves in this way, through brain damage.

The LLM will go on its merry way churning through tokens; it won’t have a feeling of loss.
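The eviction mechanism I mean can be sketched as a toy sliding window (the window size and tokens below are made up for illustration; real context windows hold thousands of tokens and real models operate on token IDs, not words):

```python
# Toy illustration of a fixed-size context window, not any real
# model's inference loop. Older tokens are silently dropped as new
# ones arrive, so information given early -- like a name -- vanishes.
from collections import deque

WINDOW_SIZE = 8  # hypothetical, tiny window for illustration

context = deque(maxlen=WINDOW_SIZE)  # deque evicts the oldest token on append

for token in ["Your", "name", "is", "Ada", ".", "Now", "let's", "talk",
              "about", "something", "else", "entirely"]:
    context.append(token)

# "Ada" has fallen out of the window: generation can no longer
# condition on it, even though nothing about the weights changed.
print(list(context))
print("Ada" in context)  # False
```

Nothing in the loop "notices" the eviction, which is the point: the text keeps flowing with no representation of what was lost.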


That's an interesting point, but are you implying that people who are content even though they have Alzheimer's or a damaged hippocampus aren't technically intelligent?


I don’t think it’s unfair to say that catastrophic conditions like those make you _less_ intelligent; they’re feared and loathed for good reasons.

I also don’t think all that many people would be seriously content to lose their minds and selves this way, but everyone is able to fear it prior to it happening, even if they lose the ability to dread it or choose to believe this is not a big deal.


Flies may, but files do not feel pain.



