I dunno. I think the main thing is that LLMs just don't have any real ability to reason. An AI that can reason might one day use an LLM to put its thoughts into words, but asking the LLM itself to write code for a program it's never seen before is putting the cart before the horse.
> Just mimicking some syntax you found on the web (which is ultimately what the AI is doing) will not get you very far at all.
I wonder if this syntax-driven phenomenon might also explain why so many natural intelligences bounce off of Lisp?