
And is it there yet? Does ChatGPT have its finger on the trigger?

I think everyone in the AI-danger community is crying wolf before we've even left the house. That's dangerous in its own way: it desensitizes everyone to the more plausible and immediate dangers.

The response to "AI will turn the world to paperclips" is "LOL"

The response to "AI could threaten jobs and may cause systems they're integrated into to behave unpredictably" is "yeah, we should be careful"



Of course it's not there yet. For once (?) we are having this discussion before the wolves are at the door.

And yes, there are more important things to worry about right now than the AIpocalypse. But that doesn't mean that thinking about what happens as (some) humans come to trust and rely on these systems isn't important.


You only get one chance to align a superintelligence before Pandora's box is opened; you can't put it back in the box, and there may be no chance to learn from mistakes. With a technology this powerful, it's never too early to research and prepare for the potential existential risk. You scoff at the "paperclips" meme, but it illustrates a legitimate issue.

Now, a reasonable counterargument might be that this risk justifies only a limited amount of attention and concern relative to the other problems and risks we are facing. That said, the problem and the risk are real, and there may be no takebacks. Preparing for tail risks is what humans are worst at. I submit that caution is warranted on both fronts: economic uncertainty and "paperclips."



