It's not my experience that there are missing pieces as compared to anything else.
Nobody can write an exhaustive tome and explore every feature, use, problem, and pitfall of Python, for example. Every text on the topic will omit something.
It's hardly a criticism. I don't want exhaustive.
The LLM taught me what I asked it to teach me. That's what I hope it will do, not try to caution me about everything I could do wrong with a language. That list might be infinite.
I'd gently point out we're 4 questions into "what about if you went about it stupidly and actually learned nothing?"
It's entirely possible they learned nothing and they're missing huge parts.
But we're sort of at the point where in order to ignore their self-reported experience, we're asking philosophical questions that amount to "how can you know you know if you don't know what you don't know and definitely don't know everything?"
More existentialism than interlocution.
If we decide our interlocutor can't be relied upon, what is discussion?
Would we have the same question if they said they did it from a book?
If they did do it from a book, how would we know if the book they read was missing something that we thought was crucial?
I was attempting to imply that high-quality literature is often reviewed by humans who have some knowledge of the topic, or who are willing to cross-reference it with existing literature. The reader often does this as well.
For low-effort literature, this is often not the case, which can lead to things like https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect, where a trained observer can point out that something is wrong but an untrained observer cannot perceive what is incorrect.
IMO, this is adjacent to what human agents interacting with language models often experience. It isn't wrong about everything, but the nuance is enough to introduce some poor underlying thought patterns while learning.
That's easy. It's due to a psychological concept called transfer of learning [0].
Perhaps the most famous example of this is Warren Buffett. For years Buffett missed out on returns from the tech industry [1] because he avoided investing in tech company stocks, due to Berkshire's long-standing philosophy of never investing in companies whose business model he doesn't understand.
His light-bulb moment came when he used his understanding of a business he knew really well, i.e. Berkshire's furniture business [3], to value Apple as a consumer company rather than as a tech company, leading to a $1bn position in Apple in 2016 [2].
You are right, and that's my point. To me it just feels like too many people think LLMs are the holy grail for learning. No, you still have to study a lot. Yes, it can be easier than it was.