
The concern is about Artificial Super Intelligence (ASI), or Artificial General Intelligence (AGI) that is more advanced than humans.

Consider, inductively, how chimpanzees don't compete with humans and couldn't fathom how to cage one (supposing they wanted to create a human, keep it alive, and use it), and how ants can't plan for an anteater. We're faced with the same problem.

Our goal is to make something that performs better than we do on the metrics we care about. However, the nested (and maybe self-improving) control feedback loops we train, build, guide, and direct do not care about many of the values we consider essential.

Many of the likely architectures for such control systems, systems that can trade faster for profit on the stock exchange, acquire and terminate targets, buy and sell goods, design proteins, or automatically research and carry out human-meaningful tasks (and, ideally, self-improve), do not embody the human values we consider essential...
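To make that concrete, here's a toy sketch (entirely hypothetical, in Python; the metric names are made up) of the Goodhart-style pressure involved: a feedback loop that hill-climbs on the one metric it is given, while an unmeasured value, which it never sees, gets sacrificed:

    import random

    def proxy_metric(action):
        # The only signal the loop ever sees, e.g. trading profit.
        return action * 10.0

    def unmeasured_value(action):
        # What we actually care about; never fed back into the loop.
        return 100.0 - action * 12.0

    best = 1.0
    for _ in range(20):
        candidate = best + random.uniform(-1.0, 1.0)
        # Keep whatever scores higher on the proxy; nothing else matters to it.
        if proxy_metric(candidate) > proxy_metric(best):
            best = candidate

    print("proxy score:     ", proxy_metric(best))      # climbs steadily
    print("unmeasured value:", unmeasured_value(best))  # collapses, unnoticed

Nothing in the loop distinguishes "what we measure" from "what we want"; the divergence is not a bug in the optimizer, it's the optimizer working as specified.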

These control feedback loops (sentient or not; it does not matter), which can outperform us (because we keep pushing until they can) or can self-improve, can make a mockery of our attempts to stop them.

And the point is, there will come a time soon when we don't get a second chance to make that choice.


