>I don’t think it even requires that AI to be sentient or malicious. The humans already are.
We are and we aren't. I was struck by this line in the OP:
>AI is manifestly different from any other technology humans have ever created, because it could become to us as we are to orangutans;
As far as I can tell, we humans treat orangutans quite kindly. I.e., on the whole, we don't go around killing them indiscriminately or ignoring them to the point of rolling over them in pursuit of some goal of our own.
The arc of human history is marked by expanding the moral circle to include animals. We take more care, and care more about them, than we ever have in human history. Further, we have a notion of 'protected species'.
What's preventing us from engineering these principles into GPT-5+n?
We’ve wiped out over 60% of the orangutan population in the last 16 years. We’re literally burning them alive to replace their habitat with palm oil plantations. [0]
We currently kill more animals on a daily basis than we have at any point in human history, and we are doing this at an accelerating rate as human population increases.
The cruelty we inflict on them in the food, clothing, and animal-testing industries, and casually as collateral damage when we exploit natural resources or dispose of our waste, is unimaginable.
None of this is kindness. There are movements to address these issues, but so far they represent a minority of the action in this space, and they have not come close to outweighing the harm in our present-day relationship with the rest of life on Earth.
All this is just to say that we absolutely do not want another being to treat us the way we treat other beings.
As to whether AI poses a genuine risk to us in the short term, I’m unsure. In the OP and EY’s article, there was something about Homo sapiens vs Australopithecus.
If it’s one naked Homo sapiens dropped into the middle of 8 billion Australopithecus, I’m not too worried about the Australopithecus.
Right, but as you point out, these issues are hotly contested and actively debated. Yes, it may be a minority position at present, but so was the idea of not torturing cats for fun, not to mention abolition, back in the day.
So, you’re content with GPT-4 killing 60% of humans to create paper clips as long as the matter is hotly contested and actively debated within its matrices?
The focus on paper clip maximizers is always curious to me. A lot more people are willing to turn a blind eye to, or debate, suffering when the objective is maximizing money.
Humans may not engage in direct violence against orangutans, but they will certainly roll over them:
> The wholesale destruction of rainforests on Borneo for the palm oil plantations of Bumitama Gunajaya Agro (BGA) is threatening the survival of orangutans
The problem is that we do not know "how" to engineer those principles. That's what the entire field of AI alignment is working on. We know what we want the AI to do; the problem is we don't know how to make certain it does that. Because if we only get it 99% right, then we're probably all dead in the end.