
This is a bogus argument. There's a lot we don't understand about LLMs, yet we built them.


We built something we don’t understand by trial and error. Evolution took a few billion years getting to intelligence, so I guess we’re a few sprints away at least


Evolution took millions of years to reach the minimal thing that can read, write and solve leetcode problems, so by similar logic we're at least a few million years in...


Except it learned from direct examples, and somehow none of these models can solve even basic logic that isn't in their training data.

Us having the ability doesn’t mean we’re close to reproducing ourselves, either


>Except it learned by direct examples

What do you mean by "it learned"? From my perspective, evolution is nothing more than trial and error on a big timescale. In a way we are much more than evolution, because we are a conscious intelligence controlling the trial and error, which shortens the timescale substantially.

Take robots that can walk, for example. Instead of starting with a fish that slowly, over millions of years, moves to land and grows limbs over many generations, we can just add legs we have already tested in simulation at 10x or more real-time speed.

AGI's potential should not be measured on nature's timescale.


Have you tried giving it basic logic that isn't in its training data?

I have. gpt-3.5-instruct required a lot of prompting to keep it on track. Sonnet 4 got it in one.

Terence Tao, the most prominent mathematician alive, says he's been getting LLM assistance with his research. I would need about a decade of training before I could do in a day any math that Tao can't do in his head in less than five seconds.

LLMs suffer from terrible, uh, dementia-like distraction, but they can definitely do logic.


There's a lot _I_ don't understand about LLMs, but I strongly question whether there is a lot that the best experts in the field don't understand about LLMs.


Oh, boy, are you in for a surprise.

Our theories of how LLMs learn and work look a lot more like biology than math. Including how vague and noncommittal they are because biology is _hard_.


this is an argument against LLM cognition. we don't know anything about the human brain, and we don't know anything about LLMs, so they must be similar? I don't follow


It's an argument that we don't need to understand LLMs to build them. But mostly it's just a statement of fact to promote correct understanding of our world.



