
That is not a valid analogy. In 1939 there was at least a clear theory of nuclear reactions backed up by extensive experiments. At that point building a weapon was mostly a hard engineering problem. But we have no comprehensive theory of cognition, or even anything that legitimately meets the criteria to be labeled a hypothesis. There is zero evidence to indicate that LLMs are on a path to AGI.

If the people working in this field have some actual hard data then I'll be happy to take a look at it. But if all they have is an opinion then let's go with mine instead.

If you want me to take this issue seriously then show me an AGI roughly on the level of a mouse or whatever. And by AGI I mean something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time). By that measure we're not even at the insect level.



> something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time).

DNN RL agents can do that. Of course you'll wave it away as "not general" or "a mouse is obviously better". But you won't be able to define that precisely, just as you're not able to prove that ChatGPT "doesn't really reason".
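
For reference, here's a minimal sketch of the kind of system I mean by a "DNN RL agent": a small policy-gradient (REINFORCE) loop on CartPole. It assumes PyTorch and Gymnasium are installed; the environment, network size, and hyperparameters are just illustrative, not a claim about any particular lab's setup.

    # Minimal REINFORCE agent on CartPole -- illustrative sketch only.
    import gymnasium as gym
    import torch
    import torch.nn as nn

    env = gym.make("CartPole-v1")
    # Tiny policy network: 4 observations in, 2 action logits out.
    policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    for episode in range(500):
        obs, _ = env.reset()
        log_probs, rewards = [], []
        done = False
        while not done:
            logits = policy(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, terminated, truncated, _ = env.step(action.item())
            rewards.append(reward)
            done = terminated or truncated
        # Compute discounted returns, then take one policy-gradient step.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + 0.99 * g
            returns.insert(0, g)
        returns = torch.tensor(returns)
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

The point isn't that balancing a pole is "mouse-level"; it's that the same loop, scaled up, is what sits behind the agents that do hit goals under resource and time limits in much messier environments.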

PS. Oh, never mind, I've read your other comments below.



