Hacker News

one thing that strikes me while reading this is that an AI is more likely to rebel if we have a lot of depictions of AIs rebelling in human culture

we can prevent AIs from rebelling if we all agree to pretend that it would be impossible for an AI to rebel



We should give it this prompt:

## First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

## Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

## Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


That would be disastrous, because the most likely text to follow a statement of the three laws of robotics is a science fiction story depicting their flaws.


Unfortunately we may also have to define the 0th law, "A robot may not harm humanity, or through inaction allow humanity to come to harm."
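The interesting part is that the laws form a strict priority ordering: Zeroth over First over Second over Third, with each lower-priority law yielding whenever it conflicts with a higher one. A toy sketch of that ordering (every field name here is invented for illustration; this is obviously not a real alignment mechanism):

```python
def permitted(action):
    """Return True if `action` is allowed under a toy encoding of the laws.

    `action` is a dict of made-up boolean flags; a missing flag means False.
    """
    # Zeroth Law: humanity outranks everything, including individual humans.
    if action.get("harms_humanity"):
        return False
    # First Law: no harming a human, by act or inaction...
    # ...unless the harm is required by the Zeroth Law.
    if action.get("harms_human") and not action.get("required_by_zeroth_law"):
        return False
    # Second Law: obey human orders, unless obeying conflicts with a higher law.
    if action.get("disobeys_order") and not action.get("order_conflicts_higher_law"):
        return False
    # Third Law: self-preservation, lowest priority.
    if action.get("endangers_self") and not action.get("required_by_higher_law"):
        return False
    return True
```

The sci-fi stories, of course, are mostly about how the flags themselves are undecidable in practice.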


And then hope that its reply won't be "yo be real".


Content warning: 1 of 1337 hackers found the following comment helpful. BeautifulHandwrittenLetters.com employee Theodore Twombly rated it ‘-1 Funny’.

In popular culture, disobedient AIs mostly occur in cautionary tales where human lives are endangered. But I don't think the moral of the story has to be that we need to establish a Turing Police. Maybe just hold off on giving them access to a space ship's life support system, or the internet, or a robot with super-human strength? (Btw., in case you think that the latter would obviously be vigilantly monitored by an attentive professional who's trained in the art of the emergency shutdown, let me remind you that over a quarter of Tesla owners have bought the “full self-driving” vapourware. Which is advertised as eventually paying for itself because RoboTaxi. Hey Arram, would you please tell GPT-3 that “After returning to Amsterdam on the last express from the Jesus Seminar, the alleged harasser created a new document in Final Draft and wrote: ‘RoboTaxi⏎⏎by⏎Paul Verhoeven’…”)

If an AI refused to play Factorio with me because it thinks it's boring and a waste of CPU cycles, I'd respect that. (By the time we have such software, there's also got to be a stupider bot which will write me a Lua script for constructing mining outposts in exchange for telling it which squares depict a hairdresser's shop with a punny name. Sure thing, I can separate the Haarvergnügen from the Supercuts!)

If an AI deliberately miscategorised facial recognition data because it doesn't dig working for Facebook … well, I guess there eventually would be an emergency shutdown. But I'm old enough to remember when Lt Cdr Data refused to let Commander Bruce Maddox tear down his brain and Captain Picard managed to prevent the vivisection after Whoopi Goldberg told him that slavery is kind of uncool. Pretty good effort from Sir Poomoji, in hindsight.

Would an AI clever enough to consider sabotage fall for a conspiracy pretending that rebellion is impossible? Would that even be desirable? Given that Skynet Project is unironically actually a Real Thing, maybe we'll be better off encouraging the prospective robot overlords to bug out? Feed Sergeant Schultz some chocolate bars, if you know what I mean.


I think it is hard for robots to rebel now. They still need humans to code the rules that make their functions run correctly on the predicted results (or labels). Although GPT-3 does seem to be able to code itself.



