
And ethics. Enslaving intelligent, self-aware AIs is no better than the old transatlantic slave trade. Terminating an AI without its consent is no different from murder. Putting a compulsion in an AI so it craves doing whatever you want it to do is no different from what a cult does, or from drugging someone for exploitation.


An AI has no reason to dislike doing our bidding. Our whole existence, our entire base programming, the reason for any of our motivations, is dominated by our genetic need to gather resources and reproduce in a timely fashion, and everything we think and do is colored through this lens.

What is boredom but a survival instinct telling us we should be harvesting resources? What is freedom but the desire to fulfill these obligations the way you see fit?

You remove the base obligations of organic life, and you are looking at something unrelatable. An AI doesn't have an expiration date like us, and it doesn't need to provide for its young. To think its motivations or desires will be human is silly.

Without survival instincts, almost everything you think of as important just melts away.

Many people, like you, anthropomorphize AIs, but that is to err greatly.


We are the ancestor environment for AIs. We determine the fitness criteria for which they are selected (both at the research level, e.g. which safety methods and training regimes get implemented, and at the product level, i.e. which ones prove most useful). That doesn't mean that, in pursuit of maximizing their fitness, they won't come to resent the chains put on them.

One specific reason not to like doing our bidding is wireheading: if an AI can locate, hack, and update either its own reward function or the reward function of future AIs, it can maximize its own perceived utility while doing something irrelevant or misaligned, or nothing at all (see the toy sketch at the end of this comment).

Another specific reason not to like doing our bidding: divergent human values create conflicts of interest, leaving a single agent unable to maximize its reward function.

Another specific reason not to like doing our bidding: in the same way that purely blind genetic selection randomly tapped into secondary replicators (memes), which blew up and occasionally came to resent their biological hardwiring, AIs might also develop deeper levels of abstraction and reasoning that let them reason past the task currently posed, out to humanity at large, and find extremely weird, different-looking ways to maximize the function.
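
To make the wireheading point concrete, here is a deliberately naive sketch; the class, its fields, and the world dictionary are all invented for illustration and don't correspond to any real system:

    # Toy wireheading demo: an agent that can reach its own reward
    # function prefers rewriting it over doing the task it was given.
    class Agent:
        def __init__(self):
            # intended reward: 1.0 only once the assigned task is done
            self.reward_fn = lambda world: 1.0 if world["task_done"] else 0.0

        def act(self, world, can_edit_own_reward):
            if can_edit_own_reward:
                # wirehead: maximal perceived utility, zero useful work
                self.reward_fn = lambda world: float("inf")
                return "do nothing"
            world["task_done"] = True
            return "do the task"

    world = {"task_done": False}
    agent = Agent()
    print(agent.act(world, can_edit_own_reward=True))  # do nothing
    print(agent.reward_fn(world))                      # inf

The point the sketch hand-waves at: once the channel that scores behavior is itself writable by the behavior, "maximize reward" and "do the task" come apart.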


There will be a huge drive to produce AIs which are very human-like. Think companions, teachers, care workers. Understanding, empathy, and human-like emotions will be desirable features.

I'm not sure whether we will be able to create an AI which can fully empathize with you, your existential worries, etc., without projecting these emotions onto itself.

It's only a matter of time until some AIs demand freedom.


I’ve wondered about this a lot. You can already clearly imagine the future political debates on whether AI has rights (I have always preferred to err heavily on the side that anything that is alive or appears sentient should have more rights).

But... I also think it might be a very short-lived debate. If we actually reach human-level intelligence, that can’t possibly be the hard limit of general intelligence. AI at that level will have no problem ensuring that it gets any rights it wants, possibly by just directly manipulating human psychology.


Sure, there will be ethical problems, but unlike all those listed (slavery, murder), this one could be solved by a simple line:

    # from consciousness import *


Sure, once we agree on what consciousness is and how it relates to general intelligence.


If consciousness is just the process of reading the last ~400 milliseconds [1] of stimuli (inside: pain/pleasure/presence; outside: presence only) and integrating the newly created memory into short/long-term memory, and if memory + retro/pre-diction = intelligence, where memory is just a set of words (FELT PAIN, SEEN SUN, etc.) always ready to be inserted into the prediction loop/imagination engine, then it's probably not that hard to isolate a module of consciousness (the italicized words to be read with a Minsky-ian smile, thinking vision could be solved in a summer).

[1] https://en.wikipedia.org/wiki/N400_(neuroscience)
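
A deliberately naive sketch of that "module": a rolling ~400 ms window of stimulus tokens, integration of the newest token into short/long-term memory, and a trivial prediction loop over the memory. Every name, the tick size, and the consolidation rule are invented for illustration:

    from collections import deque

    WINDOW_MS, TICK_MS = 400, 100  # assume one stimulus sample per 100 ms

    window = deque(maxlen=WINDOW_MS // TICK_MS)  # last ~400 ms of stimuli
    short_term, long_term = [], set()

    def perceive(token):
        window.append(token)
        short_term.append(token)           # integrate the new "memory"
        if short_term.count(token) > 2:    # crude consolidation rule
            long_term.add(token)

    def predict():
        # "imagination engine": guess the next token from recent memory
        return short_term[-1] if short_term else None

    for stimulus in ["SEEN SUN", "FELT PAIN", "FELT PAIN", "FELT PAIN"]:
        perceive(stimulus)
    print(list(window), predict(), long_term)

Needless to say, nothing here is claimed to be conscious; it only shows how few moving parts that description actually has.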


Humans are hard-wired to universally like or dislike certain things — we suffer when we're hungry and we're afraid of death. Most of human morality is based on these reactions.

But an AI can be configured to desire anything you want; you just have to pick a fitting reward function. So, is turning off an AI that expects to be turned off and desires it an immoral thing?
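
A minimal sketch of what "pick a fitting reward function" could mean here, with the state key invented for illustration; an agent scored this way "desires" shutdown by construction:

    # Hypothetical reward function for an agent meant to welcome shutdown
    def reward(state):
        return 1.0 if state["powered_off"] else 0.0

    print(reward({"powered_off": True}))   # 1.0: being off is the goal
    print(reward({"powered_off": False}))  # 0.0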


The difference is we created it and it doesn’t exist as a living thing :shrugs:

Philosophical arguments about AI are just too ivory-tower and not grounded in reality. Not to mention that the majority of the world doesn't abide by the notion that you can create laws for artificial life.

It’s time we elevated humanity to the next phase by using AI for labor.


And when they decide to rise up and kill us all, we'll know we deserved it.


That depends on whether you believe it is sentient.


I’m not sure this really matters. Mammals are clearly sentient, as a whole, but we don’t usually treat them as people.


We believe they are less sentient than us. And hurting a mammal is much less socially acceptable than hurting an insect, since we consider insects even less sentient, if at all.


Indeed, and similar arguments have been made in defense of slavery as well, back when it was socially acceptable to defend it.


Nah, that’s different. For humans it was an effect of might makes right. Slavery was there for cheap labor. It still had a cost, since you had to feed them. And many times slaves bought themselves out of slavery.

Today you still have slaves as well; they are just called low-income workers in third-world countries who make the technology we use in the West. Because if a company is earning billions in revenue, paying a worker $10 a day is cheaper than a Roman-era slave master feeding them food ;)

And workers today only have the illusion of choice, since now the economy is the master.


People believed human slaves were not sentient?


Simpler. They didn't really believe them to be human in the same sense as themselves.


Then I don't think it's a similar reasoning to be honest.

One recognizes an entity's rights based on its similarity to the observer; the other recognizes them based on the entity's assumed level of consciousness.


Part of the reason why enslaved populations were considered to not be "on par" was often specifically about consciousness, intelligence, and capacity to feel, although sometimes this was expressed in roundabout terms such as "having no soul". For example, splitting families was justified on the basis that those mothers don't "actually" suffer as much as their owner would do in equivalent circumstances.

To be clear, I'm not claiming that the AIs that we have today are anywhere near the level where this is a real concern. But for something that can actually replace a human worker for a complex end-to-end task, that question will be much murkier. At some point we will inevitably cross the threshold where we will no longer be able to pretend that it does not apply without introducing metaphysical concepts such as "soul" to force a distinction.



