Thanks for clarifying what you meant by "understanding". I think it's too broad a criterion: for example, in the video above, Morris is using "Tnetennba" in a new context, but there's no way to tell whether he really knows what it means (the joke is not that he doesn't know, but precisely that using "Tnetennba" in the kind of sentence he uses it in does not elucidate the meaning of the word).
>> So if you can explain what a word means, you can give a definition in terms of other words.
Suppose I give you the following mapping between symbols: a -> p, c -> r, d -> k, e -> j.
Now suppose I give you the phrase: "a a a c a d e e a c"
I gave you a definition of each symbol in the phrase in terms of other symbols. What does the phrase mean? Alternatively, what do the symbols, themselves, mean?
Obviously, you can't say. Being able to give the definition of a word in terms of other words presupposes that you already understand those other words. So, just because a language model uses a word doesn't mean it knows its meaning, only that it uses the word.
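To make that concrete, here is a quick sketch (Python, purely for illustration) of the substitution above. Applying the mapping just produces another string of uninterpreted symbols:

    # Substituting symbols for symbols yields more symbols, not meaning.
    mapping = {"a": "p", "c": "r", "d": "k", "e": "j"}
    phrase = "a a a c a d e e a c"
    print(" ".join(mapping[s] for s in phrase.split()))
    # -> p p p r p k j j p r

The output is exactly as opaque as the input; no amount of further symbol-to-symbol mapping changes that.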
There is a bit of a grey area, which is why I was careful to say "the model learned/understood/knows something" in my earlier comments.
However, you seem to be making the Chinese Room argument. If you define meaning such that either no computer program could possibly "understand" it, or it is unverifiable whether one does, I don't think it makes much sense to have a discussion about whether GPT-3 does. Is there a test that a model could pass that would convince you that it "knows" meaning according to your definition?
The "Chinese room argument" says that external behaviour cannot be proof of consciousness, intelligence, understanding etc. What my comment above says is that you can't explain a word A by means of another word, B, unless you already know the meaning of B.
My comment is relevant to the question of whether GPT-3 has "understanding" because, in order for GPT-3 to understand the meaning of a word A in terms of the meaning of a word B, it needs to already know the meaning of B. But that is exactly what we wish to know: whether GPT-3 knows the meaning of any word at all. Observing that GPT-3 can use a new word in the place of a different word doesn't tell us whether it knows the meaning of the original word.
As of yet, no: there is no formal test that would convince me, or a majority of researchers in AI, that a model "knows", "understands", or anything like that. The reason is not that I am too stubborn; rather, there simply aren't such tests available yet. One reason for that is that we don't, well, understand what it means to "understand". We don't have a commonly accepted formal definition of that ability, and without one we can't really design tests to prove that some system has it.
The takeaway is that it will be a long time before we can know for sure that a system displays intelligence, understanding, etc. This may be unsatisfying, but the alternative is to design meaningless tests that do not prove what we set out to prove, and then proclaim the goal achieved when the tests pass. That does not sit well with the purpose of scientific endeavour, which is to acquire knowledge, not to pass tests and make big proclamations about winning this or that competition.
In short, I'm not saying that computers can't have understanding, or that we can't know whether they do. I'm saying that, with current technology, these things are not possible.