Hacker News | asey's comments


On the livestream (perhaps elsewhere?) you can watch the submissions and scores come in over time. The LLM steadily increased (and sometimes decreased) its score over time, though by the end it did seem to plateau. You could even see it try out new strategies (e.g. with walls), which didn't appear until about half-way through the competition.


Could you give an example?


Those are ChatGPT models. The code-davinci-002 model is still available - they responded to community requests to keep it up.


Models like these don't see words as made up of letters; rather, they see whole words (tokens) as single entities. As a result, they're not very good at creating novel (non-memorized) anagrams, palindromes, and the like.
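
For example (a minimal sketch, assuming the tiktoken library; the sample word is only an illustration):

    import tiktoken

    # A GPT-style BPE tokenizer: the model receives integer token IDs,
    # not individual characters.
    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("palindrome")
    print(ids)                              # a short list of integers
    print([enc.decode([i]) for i in ids])   # multi-character chunks, not letters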


Are you sure that’s GPT-3? Doesn't match my experience of its outputs at all.


You can get this sort of output by asking GPT-3 to do a human's impression of an AI.


Can you give a verbatim example prompt? Because my understanding is that GPT-3 works by generating responses based on seed phrases, not from arbitrary instructions.


Your "seed phrase" can be any text you'd like, including text that asks the model a question.


Of course, though in fairness parent was answering grandparent's specific question (and accurately, in my experience).


But where is the apple? That is, where is the emergent phenomenon of the apple?


The emergent phenomenon is a pattern in my brain, just as a running computer program is an emergent phenomenon in a computer.

If the extra thing dualism adds is just behaviours of matter, how is that different from materialism?


Who is reading the Apple?


I'm not sure what you mean by reading.

We say that we can experience thinking about an Apple, or imagining an Apple. I see no reason why a computer, or other physically implemented AGI system, could not do that. I suspect the act of imagination is just generating, processing and transforming a computational model abstracting the thing being imagined.

I believe brains are physical objects, and therefore physical objects can imagine Apples.


The plane of imagination isn’t physical, else you could touch it.


I don't know what a plane is in this context.

You can't touch Fourier transforms either, but a human brain or a microchip can compute them.
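
To make that concrete (a toy sketch, nothing more than an illustration):

    import numpy as np

    # An abstract mathematical object, computed by a very physical machine.
    signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 8))
    spectrum = np.fft.fft(signal)
    print(spectrum)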


Sure you can; it's done currently for Deep Brain Stimulation. What you're proposing is "the plane of Quake isn't physical", which is nonsense.

I can make the computer imagine Quake for me, then fiddle with its plane of imagination for some sweet wallhacks.


If you create a device that can insert objects into the plane of someone’s imagination, that would be awesome :)


That's a side effect of DBS today. It's a random and crude method, but arguing that we'll hit some ineffable wall preventing more fine-grained control is... well, let's just say that the gaps for gods grow ever smaller.


Who is it that fetched this comment from the internet for you?

Using vaguely-defined words to support a deist position is a time-honored tradition, but isn't particularly interesting or convincing.


There isn't a platonic ideal "Apple" if that's what you mean by emergent phenomenon.

A collection of neurons can build a model of the world that includes its experience of apples, and from that, dedicate some neurons to representing a particular instance of an apple. This model isn't the reality of "Apples", though, and is physically located in the brain.


“There isn't a platonic ideal "Apple" if that's what you mean by emergent phenomenon.”

Sure there is, that’s what DeepMind showed us with how to find cats in images :)


That's exactly my point. DeepMind has an idea of what a cat is based on its experiences, just as you or I do. Each of our models is woefully incomplete, based on very limited sensory information. These models all disagree with each other and with reality to various degrees.

There exist many things many humans have lumped together under a single label such as "cat". Those categories are all wrong, but sometimes they're useful. Machines can also get in on the fun, just as well as humans, as you point out. There's no magic there, humans aren't special.


The "humans aren’t special" bit really comes down to whether you believe in free will (which by any meaningful definition is quite special).


Free will is another one of those things that people love to trot out because it's so ill-defined. To cut through all the crap though, it's very simple: "free will" === "unpredictable behavior". This inherently means that it's observer-dependent.

This has the benefit that it empirically fits how people think about it. Nobody thinks a rock has free will. Some people think animals have free will. Lots of people think humans have free will. This is everybody trying to smush a vague concept into the very simple, intuitive definition above.

Which is all to say that free will is about as relevant to any conversation as say astrology is: not one bit.


As much as a cloud of atoms doth protest that free will is irrelevant, reality has a way of not caring :)


Funny, I'd say reality has a way of existing despite all of the comforting woo people like to make up about it.


I think that would be a question of memory rather than of having a conscious experience, though it raises the question of whether consciousness can exist without some mechanism for memory, even if only short-term.


Hi Ben, could I ask the size of your team managing these clusters and development efforts?

