On the livestream (perhaps elsewhere?) you can watch the submissions and scores come in over time. The LLM steadily increased (and sometimes decreased) its score over time, though by the end it did seem to hit a plateau. You could even see it try out new strategies (with walls, e.g.) that didn't appear until about halfway through the competition.
Models like these don't see words as made up of letters; rather, they see whole words (tokens) as single entities. As a result, they're not very good at creating novel (non-memorized) anagrams, palindromes, and the like.
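If you want to see the tokenization concretely, here's a minimal sketch using OpenAI's tiktoken library (assuming it's installed; the exact splits depend on which vocabulary you load):

```python
# Minimal sketch: the model sees opaque token IDs, not characters.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["apple", "racecar", "anagram"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(word, ids, pieces)

# Common words typically map to one or two IDs, so the individual
# letters are never directly visible to the model.
```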
Can you give a verbatim example prompt? Because my understanding is that GPT-3 works by generating responses based on seed phrases, not from arbitrary instructions.
We say that we can experience thinking about an apple, or imagining an apple. I see no reason why a computer, or other physically implemented AGI system, could not do that. I suspect the act of imagination is just generating, processing, and transforming a computational model that abstracts the thing being imagined.
I believe brains are physical objects, so physical objects can imagine apples.
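To make that concrete, here's a toy sketch (the names and fields are purely illustrative, not a claim about how brains actually represent anything): "imagining" as generating a small computational model from experience and then transforming it.

```python
# Toy illustration: imagination as generating and transforming a model.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AppleModel:
    color: str
    ripeness: float  # 0.0 = unripe, 1.0 = fully ripe

# A model built from "experience" of a particular apple...
perceived = AppleModel(color="green", ripeness=0.3)

# ...transformed in "imagination" into an apple never actually seen.
imagined = replace(perceived, color="red", ripeness=1.0)

print(imagined)  # AppleModel(color='red', ripeness=1.0)
```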
That's a side effect of DBI today. It's a random and crude method for now, but arguing that we'll hit some ineffable wall that will prevent more fine-grained control is... well, let's just say that the gaps for gods grow ever smaller.
There isn't a platonic ideal "Apple", if that's what you mean by an emergent phenomenon.
A collection of neurons can build a model of the world that includes its experience of apples and, from that, dedicate some neurons to representing a particular instance of an apple. This model isn't the reality of "Apples", though; it's physically located in the brain.
That's exactly my point. DeepMind has an idea of what a cat is based on its experiences, just as you or I do. Each of our models is woefully incomplete, based on very limited sensory information. These models all disagree with each other, and with reality, to varying degrees.
There exist many things that many humans have lumped together under a single label such as "cat". Those categories are all wrong, but sometimes they're useful. Machines can get in on the fun just as well as humans can, as you point out. There's no magic there; humans aren't special.
Free will is another one of those things that people love to trot out because it's so ill-defined. To cut through all the crap, though, it's very simple: "free will" === "unpredictable behavior". This inherently means that it's observer-dependent.
This has the benefit that it empirically fits how people think about it. Nobody thinks a rock has free will. Some people think animals have free will. Lots of people think humans have free will. This is everybody trying to smush a vague concept into the very simple, intuitive definition above.
Which is all to say that free will is about as relevant to any conversation as, say, astrology is: not one bit.
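If it helps, here's a toy sketch of that observer-dependence (purely illustrative): a seeded RNG looks unpredictable to anyone who lacks the seed, and perfectly predictable to anyone who has it.

```python
# The same process is "unpredictable" to an observer without the seed
# and trivially predictable to one who has it.
import random

SECRET_SEED = 42  # known only to the "inside" observer

def agent_choice() -> int:
    """One decision by the 'agent': deterministic, but looks free."""
    rng = random.Random(SECRET_SEED)
    return rng.randint(0, 9)

# Outside observer: no seed, so the output looks like a free choice.
# Inside observer: replays the same computation and predicts it exactly.
predicted = random.Random(SECRET_SEED).randint(0, 9)
print(agent_choice() == predicted)  # True: "free will" evaporates with knowledge
```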
I think that would be a question of memory rather than of having a conscious experience, though it does raise the question of whether consciousness can exist without some mechanism for memory, even if only short-term.