
> Were you, until approximately last week, ridiculing GPT as unimpressive, a stochastic parrot, lacking common sense, piffle, a scam, etc. — before turning around and declaring that it could be existentially dangerous? How can you have it both ways? If the problem, in your view, is that GPT-4 is too stupid, then shouldn’t GPT-5 be smarter and therefore safer? Thus, shouldn’t we keep scaling AI as quickly as we can … for safety reasons? If, on the other hand, the problem is that GPT-4 is too smart, then why can’t you bring yourself to say so?

I think the flaw here is equating "smart" with "powerful".

Personally, I think generative AI is scary both when it gets things wrong and when it gets things right. If it were so stupid that it got things wrong all the time and no one cared to use it, it would be powerless and non-threatening.

But once it crosses a threshold where it's right (or appears to be) often enough that people find it compelling and use it all the time, it becomes an incredibly powerful force in the hands of millions, with consequences we don't understand. It appears to have crossed that threshold even though it still gets things hilariously wrong fairly often.

Making it smarter doesn't walk it back across the threshold; it just makes it even more compelling. Maybe being right more often also makes it safer at an even greater rate, and is thus a net win for safety, but that's entirely unproven.



Yes, we need to think about ways to reduce its power. Intelligence isn't even well-defined for bots.

For most people, AI chat is currently a turn-based game [1], and we should try to keep it that way. Making it into an RTS game by running it faster in a loop could be very bad. Fortunately, it's too expensive to do much of that for now.

So one idea is to keep it under human supervision. The way I would like AI tools to work is like single-stepping in a debugger: a person gets a preview of whatever API call the model is about to make before it executes. LangChain, Bing's automatic search, and OpenAI's plugins already violate this principle. At least they're slow.
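Here's a minimal sketch of what that single-stepping could look like, in Python. Everything in it is hypothetical: ToolCall, approve_and_run, and search_web are made-up names for illustration, not the actual API of LangChain, Bing, or OpenAI's plugins.

    # Hypothetical human-in-the-loop gate for AI tool calls.
    # None of these names come from a real library.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class ToolCall:
        """A call the model wants to make, previewed before it runs."""
        tool: Callable[..., Any]
        args: dict

    def approve_and_run(call: ToolCall) -> Any:
        # The debugger-style "single step": show exactly what would run,
        # and run it only if the human approves.
        print(f"Model wants: {call.tool.__name__}({call.args})")
        if input("Allow this call? [y/N] ").strip().lower() != "y":
            return "Call denied by user."
        return call.tool(**call.args)

    def search_web(query: str) -> str:
        # Stand-in for a real search tool.
        return f"(results for {query!r})"

    print(approve_and_run(ToolCall(search_web, {"query": "AI safety"})))

The point is that the default is inert: the model can propose a call, but only a person can execute it.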

AI chatbots will likely get faster. Legal minimums on price per query and on API response times could help keep AI mostly a turn-based game, rather than letting it turn into something like robot trading on a stock exchange.
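As a toy sketch of what a response-time floor could look like server-side (the two-second number and all the names here are arbitrary assumptions, not anything actually proposed):

    import time

    MIN_SECONDS_PER_QUERY = 2.0  # hypothetical floor; the real value is a policy question

    def answer_query(query: str) -> str:
        # Stand-in for the actual model call.
        return f"(answer to {query!r})"

    def throttled_answer(query: str) -> str:
        # Pad every response so it never returns faster than the floor,
        # which caps how fast a client can run the model in a loop.
        start = time.monotonic()
        result = answer_query(query)
        remaining = MIN_SECONDS_PER_QUERY - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
        return result

A per-account rate limit would do a similar job; the point is just that latency itself becomes the throttle.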

[1] https://skybrian.substack.com/p/ai-chats-are-turn-based-game...


I feel like many people who signed the statement did so not because they really agreed with it, but because they want a pause on the churn, much as OP describes colleagues who admitted they signed for their own academic reasons. Many don't actually think it's smart but find it dangerous for other reasons, or they take issue with the blatant IP violation that's simply being assumed to be okay and "fair use."



