We don't _really_ know what the physical manifestations of meaning and form are in the brain... they're just concepts we invented.
If anything, GPT-3 is suggesting that either:
1. Tasks which were previously thought to require meaning actually turn out only to require form.
2. Meaning and form are more related than previously thought.
Both are interesting findings imo, but 2 would be huge, especially if it suggests how the brain might work. Could meaning be an emergent phenomenon of form?
It seems like Bender & Koller's argument can just as easily prove that humans don't have representations of meaning either. Our brains get input from multiple "languages"---conventional natural languages as well as the neural codes that represent our sensory inputs. All we have access to is the form of these inputs, and so we can never learn the underlying "meaning".
Hmm I think I see what you're saying. Reality is experienced through electrical signals, arguably we're just learning those inputs. But then I (as always I can only speak for myself, everyone else could be a p-zombie for all I know!) also have a qualitative experience of trees, and words, and an experience of meaning and understanding.
If you look too deeply it quickly gets philosophical.
> Reality is experienced through electrical signals, arguably we're just learning those inputs.
That's exactly what we're doing. And we're never given the "answer sheet" to figure out whether we understood the platonic, capital T Truth, or whether we just learned a spurious correlation. We just keep getting more of those inputs. Which is why it seems to me that an unsupervised sequence prediction model like GPT-3 is the only sort that could ever give rise to something akin to human consciousness.
The big differentiator seems to be that with a pure text sequence model, inputs go in, but the outputs don't have any control over future inputs. It isn't structured to have anything like agency, just passive observation and prediction. But a useful "understanding" in a human sense is related to what can be done with that understanding to enact change in the environment. I don't know how you would teach it that without giving it a Reddit account and setting it loose.
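To make the "passive observation and prediction" point concrete, here's a minimal sketch (a trivial bigram counter standing in for a real language model, names are mine, not from any paper): the model tallies its input stream and emits predictions, but nothing it predicts feeds back into the stream it observes.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy stand-in for an autoregressive sequence model."""

    def __init__(self):
        # counts[prev][next] = how often `next` followed `prev`
        self.counts = defaultdict(Counter)

    def observe(self, tokens):
        # Passive: the model only tallies what the environment feeds it.
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Emits the most frequent continuation. Note the prediction is
        # returned to the caller but never appended to the input stream:
        # there is no channel for the model to enact change.
        if not self.counts[prev]:
            return None
        return self.counts[prev].most_common(1)[0][0]

stream = "the cat sat on the mat".split()
model = BigramPredictor()
model.observe(stream)
```

After observing the stream, `model.predict("cat")` returns `"sat"`, but that output goes nowhere; an agent, by contrast, would have its predictions alter the next batch of observations.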
> But then I (as always I can only speak for myself, everyone else could be a p-zombie for all I know!) also have a qualitative experience of trees, and words, and an experience of meaning and understanding.
> If you look too deeply it quickly gets philosophical.
I'm not so sure I have those things. I'm glad you do. That's one reason I'm never going to do ketamine.
The paper references the Symbol Grounding Problem, but I haven't found the SGP's distinction between form and meaning completely convincing without evidence of some physical, observable process.
In the end, if you look long enough, it seems to call the very nature of consciousness into question.