
There is a bit of a grey area, which is why I was careful to say "the model learned/understood/knows something" in my earlier comments.

However, you seem to be making the Chinese Room argument. If you define meaning such that either no computer program could possibly "understand" meaning, or it is unverifiable whether one does, then I don't think it makes much sense to discuss whether GPT-3 does. Is there a test a model could pass that would convince you it "knows" meaning according to your definition?



The "Chinese room argument" says that external behaviour cannot be proof of consciousness, intelligence, understanding etc. What my comment above says is that you can't explain a word A by means of another word, B, unless you already know the meaning of B.

My comment is relevant to the question of whether GPT-3 has "understanding" because, in order for GPT-3 to understand the meaning of a word A in terms of the meaning of a word B, it needs to already know the meaning of B. But that is precisely what we wish to know: whether GPT-3 knows the meaning of any word at all. Observing that GPT-3 can use a new word in place of a different word doesn't tell us whether it knows the meaning of the original word.

As of yet, no, there is no formal test that would convince me, or a majority of researchers in AI, that a model "knows", "understands", or anything like that. The reason is not that I am too stubborn; rather, there simply aren't such tests available yet. One reason for that is that we don't, well, understand what it means to "understand". We have no commonly accepted formal definition of such an ability, and without one we can't design tests to prove that some system has it.

The takeaway is that it will be a long time before we can know for sure that a system displays intelligence, understanding, etc. This may be unsatisfying, but the alternative is to design meaningless tests that don't prove what we set out to prove, and then proclaim the goal achieved when the tests pass. That does not serve the purpose of scientific endeavour, which is to acquire knowledge, not to pass tests and make big proclamations about winning this or that competition.

In short, I'm not saying that computers can't have understanding, or that we can't ever know whether they do. I'm saying that these things are not possible with current technology.



