
On one hand I hate to belabor this point, but on the other I think it's actually super important.

Both of these things are true:

1. The relationships between parameter weights are mysterious and non-evident, and we don't know precisely why the model is so effective at token generation.

2. An agent built on top of an LLM cannot have any thought, intent, consideration, agenda, or idea that is not readable in plain English, because all of those concepts involve state.



I'm not going to argue whether that's correct or not. In the end, adding state to an LLM is trivial. Bing Chat has enough state to converse without forgetting the context. Google put an LLM on a physical robot, which has state even if narrowly understood as its position in space. Go further and you might realize we already have systems that are part LLM, part other state (an LLM plus a stateful human on the other side of the chat).
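For illustration, here's a minimal sketch of what "adding state" amounts to: wrap a stateless text-generation call in a loop that carries the transcript forward. The `generate` function and `StatefulChat` class are hypothetical stand-ins, not any particular vendor's API.

    # Minimal sketch of wrapping a stateless LLM call in external state.
    # `generate` is a placeholder for any stateless completion call;
    # the only "state" here is the transcript, which is plain text.

    def generate(prompt: str) -> str:
        # Stand-in for a real model call (assumed stateless).
        return f"(model reply to: {prompt[-40:]!r})"

    class StatefulChat:
        def __init__(self):
            self.history: list[str] = []  # conversation state, readable as plain text

        def say(self, user_message: str) -> str:
            self.history.append(f"User: {user_message}")
            prompt = "\n".join(self.history) + "\nAssistant:"
            reply = generate(prompt)
            self.history.append(f"Assistant: {reply}")
            return reply

    chat = StatefulChat()
    chat.say("Remember that my favorite color is green.")
    print(chat.say("What is my favorite color?"))

The point of the sketch is that the state lives entirely outside the model, in text the model itself produced or consumed.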

So we have ever-more-powerful, seemingly intelligent LLMs attached to state, with no obvious limit to the growth of either. I don't see why, in the extreme, this shouldn't extrapolate to godlike intelligence, even with the state caveat.



