Hacker News | new | past | comments | ask | show | jobs | submit | wmf's comments | login

Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)

> Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)

Nick Bostrom (who wrote the paper this thread is about) published "Superintelligence: Paths, Dangers, Strategies" back in 2014, over 10 years before "If Anyone Builds It, Everyone Dies" was released, and the possibility of AI doom was a major factor in that book.

I'm sure people talked about "AI doom" even before then, but a lot of the concerns people have about AI alignment (and the reasons why AI might kill us all, not because it's evil, but because not killing us is a lower priority than other tasks it may want to accomplish) come from "Superintelligence". Google "The Paperclip Maximizer" to get the gist of his scenario.

"Superintelligence" just flew a bit more under the public zeitgeist radar than "If Anyone Builds It, Everyone Dies" did, because back when it was published the idea that we would see anything remotely like AGI in our lifetimes seemed very remote, whereas now it is a bit less so.


When you put it that way, it sounds much easier to wipe out ~90% of humanity than to cure all diseases. This could create a "valley of doom" where the downsides of AI exceed the upsides.

That's what the paper says. Whether you would take that deal depends on your level of risk aversion (which the paper gets into later). As a wise man once said, death is so final. If we lose the game we don't get to play again.

Everyone dies. And if your lifespan is 1400 years, you won't live for nearly 1400 years. OTOH, people with a 1400 year life expectancy are likely to be extremely risk averse in re anything that could conceivably threaten their lives ... and this would have consequences in re blackmail, kidnapping, muggings, capital punishment, and other societal matters.
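The survival math behind that risk aversion can be sketched with a toy model (all numbers made up): even with aging cured, a constant annual chance of accidental death p caps expected lifespan at 1/p years, and a majority of people die before ever reaching that mean.

```python
# Toy sketch: lifespan under a constant annual accident risk follows a
# geometric distribution, so "life expectancy 1400" does not mean most
# people live anywhere near 1400 years. The rate p is a made-up assumption.
import math

p = 1 / 1400  # assumed annual probability of accidental death

mean_lifespan = 1 / p  # expected value of a geometric distribution
# Median: solve (1 - p)**n == 0.5 for n
median_lifespan = math.log(2) / -math.log(1 - p)

print(round(mean_lifespan))    # 1400
print(round(median_lifespan))  # ~970: most die well before the mean
```

Since P(surviving 1400 years) = (1 - p)^1400 ≈ e^-1 ≈ 37%, roughly two thirds of such a population dies before the "expected" 1400 years, which is why every avoidable risk would loom so large.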

Companies developing AI don't worry about this issue, so why should we?

They know the truth. Current AI is a bit useful for some things. The rest is hype.

Azure? OpenShift? "I don't think about you at all." — Matt Garman probably

You might not, but a lot of very big enterprises use OpenShift on Azure.

I haven't watched The Thirteenth Floor in a while. The kids today don't even know about it.

Since I don't work for AWS I'm allowed to say that at the scale of millions/billions of microVMs you're better off running them on bare metal instances to avoid the overhead of nested virtualization.

If I remember correctly, Firecracker VMs don't have the same security guarantees as EC2 instances. I think I remember that AWS doesn't put multiple accounts' Lambdas on the same bare-metal server, or maybe the same VM; I can't remember which.

I used to work for AWS and I’m allowed to say the same thing. ;-)

Putting aside interconnection costs, when electricity is auctioned increased demand can increase wholesale prices for everyone.
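The marginal-pricing effect can be shown with a toy uniform-price ("pay-as-clear") auction, the mechanism most wholesale electricity markets use; the `clearing_price` helper and all numbers here are made up for illustration.

```python
# Toy sketch of a uniform-price electricity auction: every accepted
# generator is paid the offer of the marginal (last accepted) unit,
# so a big new source of demand can raise the price for ALL buyers.
def clearing_price(offers, demand_mw):
    """offers: list of (capacity_mw, offer_price_per_mwh); returns the
    marginal unit's offer, which every accepted generator is paid."""
    supplied = 0
    for capacity, price in sorted(offers, key=lambda o: o[1]):
        supplied += capacity
        if supplied >= demand_mw:
            return price  # marginal unit sets the price for everyone
    raise ValueError("demand exceeds total offered capacity")

offers = [(100, 20), (100, 40), (100, 90)]  # (MW, $/MWh)
print(clearing_price(offers, 150))  # 40: mid-priced plant is marginal
print(clearing_price(offers, 250))  # 90: new demand raises the price for all
```

In the second call, adding 100 MW of demand (say, a new data center) pushes the expensive plant into the stack, and every megawatt-hour, including the pre-existing load, now clears at $90 instead of $40.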

Financing.

"Releasing a new version of Windows that just covers new Arm PCs is another signal of Microsoft’s commitment to Arm processors and the Arm version of Windows"

Real commitment would be to have a single build that runs on x86 and ARM. This special build sounds like a step backwards and a sign of incompetence.


They did the same with Windows XP: different builds for the 32-bit and 64-bit x86 versions.
