
> I ask, how. He is making reductive logical leaps that don't make sense. It's FUD.

His Time article addresses this, as does much of his other writing. The reasoning really stems from two key points:

1) The vast majority of possible superintelligences have utility functions that don't include humans. Mind space is large. So by default, we should assume that a superintelligence won't go out of its way to preserve anything we find valuable. And as Eliezer says, we're made of useful atoms.

2) By definition, it can think of things that we can't. So we should have no confidence in our ability to predict its limitations.

It's reasonable to challenge assumptions, but it's not reasonable to say this line of reasoning doesn't exist.



3) We probably won't know WHAT its train of thought is, what it's actually thinking, so at that point we have lost control of it. We will have no idea what it will do next, and we will be unprepared to counter anything it does. We then literally become just another "AI" (the human race) pitted against it.



