It would be a mistake to say that the output from GPT3 lacks coherent meaning. It's not that the output is gibberish, it's that it's too easy to mistake it for a human's work. This means that it's easy to mistake it for something that was created with understanding and intention, when in fact the author was nothing more than a random number generator. The same risk exists for copilot. [--GPT3]
Well, take it up with GPT3 since it wrote that reply. :P
I don't fully disagree with it, though 'nothing more' is a bit too strong. Still, the author of a GPT3-written comment like the one here, where the prompt was essentially just the thread, really is mostly the RNG: the language model defines the distribution of plausible texts, and the RNG picks which output you actually get.
GPT3 could have written your comment-- if only it drew the right random numbers.
What RNG? It definitely doesn't randomly pick words. If the comment I responded to was written by a bot (is that legal? Can I report that?) then it's indistinguishable from a human written comment.
GPT3 works on a compressed representation of text, using symbols (tokens) that are usually smaller than complete words but larger than single letters. It takes a sequence of symbols as context and generates a probability distribution for the next symbol. A random number generator is then used to sample from that distribution, and the process repeats with the selected symbol appended to the context. So its output is random, but not uniformly random.
Exclusively selecting the most likely symbol (greedy decoding) produces pathological behavior -- typically repetitive loops -- for anything beyond extremely short outputs.
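The sample-append-repeat loop above can be sketched in a few lines. This is a minimal illustration with a hand-written bigram table standing in for the model -- GPT3's real tokenizer, context window, and network are nothing like this, but the decoding loop has the same shape:

```python
import random

# Toy stand-in for a language model: a hand-written table mapping the
# previous token to a distribution over next tokens. (A real model
# like GPT3 conditions on the whole context, not just one token.)
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "end": 0.2},
    "dog": {"ran": 0.5, "sat": 0.3, "end": 0.2},
    "sat": {"end": 0.6, "the": 0.4},
    "ran": {"end": 0.6, "the": 0.4},
}

def generate(start, rng, greedy=False, max_len=10):
    """Autoregressive decoding: model gives P(next | context), RNG picks."""
    out = [start]
    while out[-1] != "end" and len(out) < max_len:
        dist = BIGRAMS[out[-1]]
        if greedy:
            # Always take the single most likely token: deterministic,
            # and prone to degenerate repetition in real models.
            nxt = max(dist, key=dist.get)
        else:
            tokens, probs = zip(*dist.items())
            nxt = rng.choices(tokens, weights=probs)[0]
        out.append(nxt)
    return out

print(generate("the", random.Random(0)))               # a random draw
print(generate("the", random.Random(0), greedy=True))  # always the same
```

The only difference between two runs of the sampling path is which random numbers come up, which is exactly why a different draw could have produced a different comment.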
What caused GPT3 to output its comment rather than yours is purely a product of its random choices: there is a set of draws that would have made it output your comment instead. You can see this property employed by the GPT2 text compressor (https://bellard.org/libnc/gpt2tc.html): to compress text, it just writes down the choices, using an entropy coder to represent likely choices with fewer bits.
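The "write down the choices" idea can be shown with a toy model. Here the compressor records each actual token's rank in the model's prediction instead of sampling; a good model puts the true token near rank 0 most of the time, and small ranks compress well. (gpt2tc uses GPT2 plus a proper arithmetic coder, not plain ranks -- this is just the principle, with a made-up bigram table.)

```python
# Hypothetical toy model: previous token -> distribution over next tokens.
MODEL = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "end": 0.2},
    "sat": {"end": 0.6, "the": 0.4},
}

def ranked(prev):
    """Model's candidates for the next token, most likely first."""
    return sorted(MODEL[prev], key=MODEL[prev].get, reverse=True)

def compress(tokens):
    """Record, for each actual token, its rank in the model's prediction."""
    return [ranked(prev).index(cur) for prev, cur in zip(tokens, tokens[1:])]

def decompress(start, ranks):
    """Replay the recorded choices to reconstruct the exact text."""
    out = [start]
    for r in ranks:
        out.append(ranked(out[-1])[r])
    return out

text = ["the", "cat", "sat", "end"]
ranks = compress(text)
print(ranks)  # [0, 0, 0] -- the model predicted every token correctly
assert decompress("the", ranks) == text
```

The recorded ranks are literally the "random choices" a generator would have needed to make in order to emit that exact text, which is the sense in which a different RNG stream could have produced your comment.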
I assume copilot is the same general structure as GPT-- just trained on different data.
And yes, the comment you responded to was written entirely by GPT3 (with some number of retries and trims). As it said-- it's "easy to mistake it for a human's work". :) There is nothing illegal about it, but I suppose HN would prefer that there be enough human supervision of bot comments such that they're limited to contexts where they are funny/insightful. :P