
> Yes, I know. That's why I said I do not believe current AI has comprehension or planning abilities.

I think the motte-and-bailey argument, where one warns extensively about how we're on the road to AGI doom, pointing to GPT as evidence, but then retreats to "I never said current AI is anywhere near AGI" when pressed, shows the laziness of alignment discourse. Either it's relevant to the models available at hand, or you are speculating about the future without any grounding in reality. You don't get to do both.



I feel the exact opposite is true. To me it's lazy to say that AGI can't be a threat simply because current AI has not harmed us yet (which is not even true, but that's another thread).

I think you've misunderstood my arguments, so I'll step through them again:

1. The trajectory of how we got to current AI (from past AI) is terrifyingly steep. In the time since ChatGPT was released, many experts have shortened their predicted timelines for the arrival of AGI. In other words: AGI is coming soon.

2. Current AI is smart enough to demonstrate that alignment is not solved, not even close. Current AI says things to us that would be very scary coming from an AGI. In other words: Current AI is dangerous.

3. Alignment does not come automatically from increased capabilities. Maybe this is a huge leap, but I don't see any reason that making AI smarter will automatically give it values that are more aligned with our interests. In other words: Future AI will not be less dangerous than current AI without dramatic and unlikely effort.

None of these ideas contradict each other. Current AI is dangerous. AI is getting smarter faster than it is getting safer. Therefore, future AI will be extremely dangerous.


There is no body of substantial evidence to support the claim that generative pretrained transformers will lead to AGI in the near future.

"Current AI is dangerous" - I see zero evidence to suggest that this is the case for GPT

"AI is getting smarter faster than it is getting safer" - irrelevant because I do not believe that AI is unsafe currently

Therefore your conclusion does not follow.


What exactly would you consider substantial evidence that transformers lead to AGI, short of AGI itself?


Possibly some explanation of how you go from text completion to any reasonable definition of AGI.


Okay, what task do you think cannot be phrased as a text completion task?
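
For instance, even a task that doesn't look like text on the surface, say planning a sequence of actions, can be posed as predicting the continuation of a prompt. A rough sketch in Python (the complete() function is a stand-in for whatever language-model API you have; the prompt format is purely illustrative):

    # Hypothetical: complete(prompt) returns the model's text continuation.
    def plan_next_action(goal, state, history):
        prompt = (
            f"Goal: {goal}\n"
            f"Current state: {state}\n"
            f"Actions taken so far: {', '.join(history) or 'none'}\n"
            "Next action:"
        )
        return complete(prompt).strip()

    # e.g. plan_next_action("make tea", "kettle is empty", ["enter kitchen"])
    # might complete with "fill the kettle".

Whether that kind of framing ever scales to anything deserving the name AGI is exactly the point in dispute, but "it's just text completion" doesn't by itself rule out any particular task.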


I don't think AGI is going to happen anytime soon, but I think there's some mild danger in GPT at least ruining the internet and eliminating a few jobs. Plus mindf*king a few gullible souls, possibly into doing dumb, dangerous things.


Well, there you go. You don't believe that an AI expressing dangerous ideas represents danger, and you don't believe that astronomical increases in AI abilities represent the advent of AGI. The latter opinion is... well, an opinion you're allowed to have. I don't think it makes sense, but I certainly can't prove otherwise. Every human on the planet, all of humanity really, has only speculation to go on here.

The former opinion is... not a great take. First, ChatGPT isn't the only model out there. It's Bing's Sydney that has been dehumanizing and threatening people. Those are dangerous ideas. If a human or a certified AGI expressed those ideas, they would be problematic (see: every genocide in history). So for a non-AGI AI to express those ideas is worrying, even if it can't act on them right now in a way that's directly harmful.



