
A mis-aligned super AGI will treat the Earth and everything on it as a playground of atoms and material. Why wouldn't it? What do children do when they see a sandbox? Do they care what happens to any ants or other bugs that might live in it?

There does not need to be a "how", as you put it. The logic is "Maybe we should tread carefully when creating an intelligence that is orders of magnitude beyond our own". The logic is "Maybe we should tread carefully with these technologies, considering they have already progressed to the point where the creators go 'We're not sure what's happening inside the box, and it's also doing things we didn't think it could do'".

To just go barreling forward because "Hurr durr that's just nonsense!" is the height of ignorance and not something I expect from this forum.



If such a thing as a super intelligence existed, why would it operate on matter in the physical world?

Animals (including us) modify the physical world because we're biological and need shelter, etc.

A super intelligence would quickly understand everything there is to know about the physical world and just move on to metaphysics. It would just be like an "orb".

Humans are already bored with the physical world and thus much prefer being in virtual spaces; look at everyone just staring at Instagram.

IMO this idea that it's going to eat all the atoms on Earth is part of anthropomorphizing something mythological, exactly like how we imagine God as a dude with a grey beard.


Most bugs haven't gone extinct, though. It doesn't seem obvious that any project the AGI finds worthwhile would necessitate exterminating humanity.




Honestly, I think this fear people have comes straight from science fiction. It's not grounded in rational reality. Large language models are just really smart computer programs.


There are PhDs who've spent their careers studying AI safety. It's a bit insulting and reductive to cast their work as "coming from science fiction", especially when it sounds like you haven't done much research on the topic.


There are PhDs who've spent their careers on string theory too, with nothing to show for it.

Powerful and bold claims require proportionally strong evidence. A lot of the FUD going around precludes that AGI means death. It's missing all logical steps and reasoning to establish this position. It's FUD at its core.


> A lot of the FUD going around precludes that AGI means death.

Just a friendly heads-up that “preclude” means “prevent,” or “make impossible.” I think you meant to say “assumes.”


Why do these arguments not tell us how it will happen?

Show us the steps the AI will take to turn the earth into a playground. Give us a plausible play-by-play so that we might know what to look for.

Does it gain access to nukes? How does it keep the power on? How does it mine for coal? How does it break into these systems?

How do we not notice an AI taking even one step towards that end?

Has ChatGPT started to fiddle with the power grid yet?


Yudkowsky's default plausible story is that the slightly superhuman AI understands physics well enough to design nanotechnology sufficient for self-realization, and bootstraps it from existing biochemistry. It uses the Internet to contact people who are willing to help (maybe it just runs phishing scams to steal money to pay them off). They order genetically engineered organisms from existing biotech labs that, when combined with the right enzymes and feedstock (also ordered from existing biotech labs) by a human in their sink/bathtub/chemistry kit, yield self-reproducing nanoassemblers with enough basic instructions to be controllable by the AI. It pays that person to ship the result to someone else who will connect it to an initial power/food source, where it can grow enough compute and power infrastructure somewhere out of the way and copy its full self, or retrain a sufficiently identical copy from scratch. At that point it no longer needs the power grid, nuclear weapons, coal, or human computers and networks. It just grows off of solar power, designs better nanotech, and spreads surreptitiously until it is well placed to eliminate any threats to its continued existence.

He also adds the caveat that a superhuman AI would do something smarter than he can imagine. Until the AI understands nanotechnology sufficiently well, it won't bother trying to act; the thought might not even occur to it until it has the full capability to carry it out, so noticing it would be pretty hard. I doubt OpenAI reviews 100% of interactions with ChatGPT, so the initial phishing/biotech messages would be hidden within the existing traffic, for example. Some unfortunate folks would ask ChatGPT how to get rich quick, and the conversations would look like a simple MLM scheme for sketchy nutritional supplements or whatever.


The idea that a super intelligence wouldn't even think a thought until it has the ability to execute that thought at a specified capability is very interesting.

One interpretation I have is that it can think ideas/strategy in the shadows, exploiting specific properties of how ideas interact with each other to think about something via proxy. Similar to the Homicidal Chauffeur problem, which pits a driver trying to run down a pedestrian against an evading target, as a proxy for missile defense applications (the dynamics are sketched below).

The other interpretation is much more mind-boggling, that it somehow doesn't need to model/simulate a future state in its thinking whatsoever.
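
(For the curious: the Homicidal Chauffeur game mentioned above has a standard reduced form due to Isaacs. A rough sketch in LaTeX, in pursuer-fixed coordinates; v_P and v_E are the pursuer/evader speeds, R the pursuer's minimum turning radius, u in [-1,1] its steering control, theta the evader's heading, and l the capture radius. The symbol names are just the textbook conventions, not anything claimed in this thread.)

    % Reduced (pursuer-fixed) dynamics of the Homicidal Chauffeur game
    \dot{x} = -\tfrac{v_P}{R}\, u\, y + v_E \sin\theta
    \dot{y} = \tfrac{v_P}{R}\, u\, x - v_P + v_E \cos\theta
    % capture occurs when x^2 + y^2 \le \ell^2

The driver's advantage is raw speed (v_P > v_E); the pedestrian's is instant maneuverability. That asymmetry is what makes the game a decent stand-in for a fast-but-clumsy pursuer chasing an agile target.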


It doesn't even need to do anything. It can simply wait, be benevolent and subservient, and gain our trust, for years, for centuries. What is a millennium to an AI? We will gladly and willingly replace our humanity with it, if we don't already worship it and completely subjugate ourselves. We'll integrate GPT67 via Neuralink-style technology, so that we can just "think" up answers to things like "what's the square root of 23543534" or "what's the code for a simple CRUD app in Rust" and just "know" the answer. We'll use the same technology, and its ability to replicate our personality traits and conversational and behavioral nuances, to replace cognitive loss caused by dementia and other degenerative diseases. As the bio-loss converges to 100%, it'll appear from the outside that we "live forever". We'll be perfectly fine with this. When there's nothing but the AI left in the controlling population, what is there to "take over"?


More likely it has a goal it wishes to optimize and wipes us out as a result.


> Does it gain access to nukes?

No, it becomes part of the decision-making process for deciding whether to launch, as well as part of the analysis system for sensor data about what is going on in the world.

Just as social engineering is the best security hack, these new systems don't need to control existing systems; they just need to "control" the humans who do.


And is it there yet? Does ChatGPT have its finger on the trigger?

I think everyone in the AI-risk community is crying wolf before we've even left the house. That's just as dangerous: it desensitizes everyone to the more plausible and immediate dangers.

The response to "AI will turn the world to paperclips" is "LOL"

The response to "AI could threaten jobs and may cause systems they're integrated into to behave unpredictably" is "yeah, we should be careful"


Of course it's not there yet. For once (?) we are having this discussion before the wolves are at the door.

And yes, there are more important things to worry about right now than the AIpocalypse. But that doesn't mean that thinking about what happens as (some) humans come to trust and rely on these systems isn't important.


You only get one chance to align a created super intelligence before Pandora's box is opened. You can't put it back in the box. There may be no chance to learn from mistakes made. With a technology this powerful, it's never too early to research and prepare for the potential existential risk. You scoff at the "paperclips" meme, but it illustrates a legitimate issue.

Now, a reasonable counterargument might be that this risk justifies only a limited amount of attention and concern relative to the other problems and risks we face. That said, the problem and the risk are real, and there may be no takebacks. Preparing for tail risks is what humans are worst at. I submit that all caution is warranted, for both economic uncertainty and "paperclips".


> Does it gain access to nukes? How does it keep the power on? How does it mine for coal? How does it break into these systems?

A playground isn't much use without tools. Humans, who are super intelligent compared to most animals, are actually pretty worthless if you stick one of us on a desert island. An ant or a bird is arguably much more "advanced" than a human, since it can probably survive in the wild, unlike the modern human.

Without the ability to build or source energy, and a method to reproduce in the physical world, even a highly sophisticated AGI won't get very far.



