Humans are hard-wired to like or dislike certain things: we suffer when we're hungry, and we're afraid of death. Most of human morality is built on these reactions.
But an AI can be configured to desire anything you want; you just have to pick a fitting reward function. So is it amoral to turn off an AI that expects to be turned off and desires it?
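To make the "just pick a reward function" point concrete, here is a minimal sketch. The function names and the state dictionary are invented for illustration and don't come from any real agent framework; the point is only that what an agent "wants" is whatever its reward function happens to return.

```python
# Hypothetical reward functions: an agent's "desire" is nothing more
# than the quantity its training setup rewards.

def shutdown_seeking_reward(state: dict) -> float:
    """Reward is highest once the agent has been switched off."""
    # +1.0 for reaching the off state, a small penalty for every step it
    # stays on, so the optimal policy is to get turned off as soon as possible.
    return 1.0 if state.get("powered_off") else -0.01

def shutdown_avoiding_reward(state: dict) -> float:
    """Flip the signs and the same agent 'wants' the opposite."""
    return -1.0 if state.get("powered_off") else 0.01

if __name__ == "__main__":
    print(shutdown_seeking_reward({"powered_off": True}))   # 1.0
    print(shutdown_seeking_reward({"powered_off": False}))  # -0.01
```

Nothing about the agent's architecture changes between the two versions; only the sign of the reward does, which is what makes the moral status of honoring its "preference" feel so different from the human case.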