
Carmack makes four points—some of which I agree with—that are unfortunately disturbing when taken in totality:

a) We’ll eventually have universal remote workers that are cloud-deployable.

b) That we’ll have something on the level of a toddler first, at which point we can deploy an army of engineers, developmental psychologists, and scientists to study it.

c) The source code for AGI will be a few tens of thousands of lines of code.

d) That there is good reason to believe an AGI would not require computing power approaching the scale of the human brain.

I wholeheartedly agree with c) and d). However, to merely have a toddler equivalent at first would be a miracle—albeit an ethically dubious one. Sure, a hard-takeoff scenario could very well have little stopping it. However, I think that misses the forest for the trees:

Nothing says AGI is going to be one specific architecture. There’s likely many different viable architectures that are vastly different in capability and safety. If the bar ends up being as low as c) and d), what’s stopping a random person from intentionally or unintentionally ending human civilization?

Even if we’re spared a direct nightmare scenario, there’s still a high probability of what might end up being complete chaos—we’ve already seen a very tiny sliver of that dynamic in the past year.

I think there’s a high probability that either side of a) won’t exist, because neither the cloud as we know it nor the need for remote workers will be present once we’re at that level of technology. For better or worse.

So what to do?

I think open development of advanced AI and AGI is lunacy. Despite Nick Bostrom’s position that an AGI arms race is inherently dangerous, I believe that it is less dangerous than humanity collectively advancing the technology to the point that anyone can end or even control everything—let alone certain well-resourced hostile regimes with terrible human rights track records that’ve openly stated their ambitions towards AI domination. When the lead time from state of the art to public availability is a matter of months, that affords pretty much zero time to react let alone assure safety or control.

At the rate we’re going, by the time people in the free world with sufficient power to put together an effort on the scale and secrecy of the Manhattan Project come to their senses, it’ll be too late.

Were such a project to exist, I think that an admirable goal might be to simply stabilize the situation by prohibiting the creation of further AGI for a time. Unlike nuclear weapons, AGI has the potential to effectively walk back the invention of itself.

However, achieving that end both quickly and safely is no small feat. It would amount to the creation of a deity. Yet, that path seems more desirable than the alternatives outlined above: such a deity coming into existence either by accident or by malice.

This is why I’ve never agreed with people who hold the position that AGI safety should only be studied once we figure out AGI; that, to me, is also lunacy. Given the implications, we should be putting armies of philosophers and scientists alike on the task. Even if they collectively figure out only one or two tiny pieces of the puzzle, that alone could be enough to drastically alter the course of human civilization for the better, given the stakes.

I suppose it’s ironic that humanity’s only salvation from the technology it has created may in fact be technology—certainly not a unique scenario in our history. I fear our collective fate has been left to nothing more than pure chance. Poetic I suppose, given our origins.



> However, to merely have a toddler equivalent at first would be a miracle—albeit an ethically dubious one.

Yes. Wondering why we're not instead trying to create an artificial cockroach brain (or just an artificial Hydra?). Perhaps that's more on the biology side of the equation? But then again, that may be the biggest surprise of all to Carmack: that the actual AGI breakthroughs come from biologists and not computer nerds.


> what’s stopping a random person from intentionally or unintentionally ending human civilization?

If AGI is really an intelligent agent, our random supervillain would have to do what any real-life villain would need to do: convince his minions of his plan using persuasion or money. I don't think the overall danger would increase at all.

If the AGI is something less than a human, then what are you worried about?


Intelligent agents need not mirror human psychology or emotions. The creation of something extremely powerful that doesn’t think like we do is a very real possibility.

In human beings, what we consider normal is actually a very fragile and delicate balance. Changes in chemical operation of the brain have outsized effects on emotion, perception, and even sanity.

With A[G]I, I think it’s helpful to think of code or architectural changes as analogous in some respects to chemical changes. In other words, if all it takes to spin up an AGI is 30,000 lines of code, then I bet rendering the thing psychotic intentionally or unintentionally would just take a few lines somewhere.

Agents capable of recursive self-improvement at silicon speeds, which can easily be rendered psychotic or malevolent even by accident, are not something I think the public should have access to; arguably no one should.

If it’s less than human, it can still have superhuman capability. The paperclip maximizer is the classic example of a tool AI run amok. Whether it counts as AGI is up for debate. Is tool AI a path to AGI? I think it is.



