
This is a bit like saying that 1939 Einstein wasn't an expert in nuclear bombs. Sure, they didn't exist yet, so he wasn't an expert on them, but he was an expert on the physics that led to them, and when he said a bomb was possible, sensible people listened.

A lot of people working on LLMs say they believe there is a path to AGI. I'm very skeptical of claims that there's zero evidence in support of their views. I know some of these people, and while they might be wrong, they're not stupid, grifters, malicious, or otherwise off their rockers.

What would you consider evidence that these (or some other technology) could be on a path to becoming a serious physical threat? A claim of "zero evidence" is only meaningful if something could count as evidence. What would that be?



That is not a valid analogy. In 1939 there was at least a clear theory of nuclear reactions backed up by extensive experiments. At that point building a weapon was mostly a hard engineering problem. But we have no comprehensive theory of cognition, or even anything that legitimately meets the criteria to be labeled a hypothesis. There is zero evidence to indicate that LLMs are on a path to AGI.

If the people working in this field have some actual hard data then I'll be happy to take a look at it. But if all they have is an opinion then let's go with mine instead.

If you want me to take this issue seriously then show me an AGI roughly on the level of a mouse or whatever. And by AGI I mean something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time). By that measure we're not even at the insect level.


> something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time).

DNN RL agents can do that. Of course you'll wave it away as "not general" or say a mouse is obviously better. But you won't be able to define that precisely, just as you're not able to prove that ChatGPT "doesn't really reason".
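
For what it's worth, here's a minimal sketch of the loop such agents run: a toy tabular Q-learning agent that has to reach a goal cell in a gridworld under a hard per-episode step budget (the "limited resources, including time" part of your definition). Every name and number below is made up for illustration; a DNN RL agent like DQN or PPO swaps the Q-table for a neural network but keeps the same structure.

    # Toy sketch (illustrative only): Q-learning on a 5x5 gridworld
    # with a hard per-episode step budget as the resource constraint.
    import random

    SIZE, GOAL, BUDGET = 5, (4, 4), 20        # grid size, goal cell, step limit
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1        # learning rate, discount, exploration

    # Value table over (state, action) pairs; a DNN agent would
    # approximate this with a network instead.
    Q = {((r, c), a): 0.0
         for r in range(SIZE) for c in range(SIZE)
         for a in range(len(ACTIONS))}

    def step(state, a):
        # Move within the grid; reward the goal, penalize wasted steps.
        dr, dc = ACTIONS[a]
        nxt = (min(max(state[0] + dr, 0), SIZE - 1),
               min(max(state[1] + dc, 0), SIZE - 1))
        return nxt, (1.0 if nxt == GOAL else -0.01)

    for _ in range(500):                      # training episodes
        s = (0, 0)
        for _ in range(BUDGET):               # hard resource constraint
            # Epsilon-greedy: mostly exploit the best known action.
            a = (random.randrange(4) if random.random() < EPS
                 else max(range(4), key=lambda x: Q[(s, x)]))
            s2, r = step(s, a)
            best = max(Q[(s2, x)] for x in range(4))
            Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])
            s = s2
            if s == GOAL:
                break

Obviously this only meets your definition trivially; the disagreement is over "complex" and "poorly defined", which is exactly the part nobody can pin down precisely.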

PS: Oh, never mind, I've read your other comments below.



