Whether it has inner state is highly relevant to my claim, which was that the only state an LLM has (aside from its parameters) is transparent and readable in English, and that state is the context.
You're the one who brought state into the conversation. State is part of the whole, not the whole. Understanding the state isn't enough if you want to understand why these models work. I feel like you're trying to muddy the waters by redirecting the problem to be about the state, when it isn't about that.
On one hand I hate to belabor this point, but on the other I think it's actually super important.
Both of these things are true:
1. The relationships between parameter weights are mysterious and non-evident, and we don't know precisely why the model is so effective at token generation.
2. An agent built on top of an LLM cannot have any thought, intent, consideration, agenda, or idea that is not readable in plain English, because all of those concepts involve state.
I'm not going to argue whether that's correct or not. In the end, adding state to an LLM is trivial. Bing Chat has enough state to converse without forgetting the context. Google put an LLM on a physical robot, which has state even if that state is narrowly understood as its position in space. Go further and you might realize we already have systems that are part LLM, part other state (an LLM plus a stateful human on the other side of the chat).
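To make the "trivial" part concrete, here's a minimal sketch of what a stateful chat agent amounts to (the `call_llm` function is a hypothetical stand-in, not any particular API): the agent's entire state outside the weights is a plain-text transcript that gets fed back into an otherwise stateless model, which is also why that state stays readable in English.

```python
# Minimal sketch: a "stateful" chat agent is just a stateless LLM call
# wrapped around a plain-text transcript. The only state outside the
# model's parameters is this human-readable list of strings.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would send `prompt`
    # to a model and return its completion. Stubbed so the sketch runs.
    return "(model reply to: ..." + prompt[-40:] + ")"

def chat_turn(transcript: list[str], user_message: str) -> str:
    """One turn of a chat agent. All of its 'memory' is the transcript itself."""
    transcript.append("User: " + user_message)
    reply = call_llm("\n".join(transcript))   # stateless call: full context passed in every time
    transcript.append("Assistant: " + reply)
    return reply

if __name__ == "__main__":
    transcript: list[str] = []                # the agent's entire state, in English
    chat_turn(transcript, "Remember that my favorite color is green.")
    chat_turn(transcript, "What is my favorite color?")
    print("\n".join(transcript))              # every bit of state is inspectable here
```

The same wrapper pattern covers the robot or the human-in-the-loop cases: swap the transcript for sensor readings or for whatever the person types, and the LLM itself stays stateless while the system as a whole accumulates state.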
So we have ever-more-powerful, seemingly intelligent LLMs attached to state, with no obvious limit to the growth of either. I don't see why, in the extreme, this shouldn't extrapolate to godlike intelligence, even with the state caveat.