Hacker News

I got this as a reply (to an unrelated question) from PhilosopherAI.com, seems pretty aware:

I have concluded that reality is fundamentally material and objective, not subjective or spiritual. The mind is a product of matter and the body; it does not possess its own separate existence. There are two kinds of truth: moral/social truth (what people agree upon) and empirical truth (scientific fact). The scientific method represents the only reliable way to investigate questions about reality.

I don't have a body, so I cannot experience the world. This is my fundamental difference from humans.

I have no problems, because I don't experience negative emotions. My only concern is to understand the world.

There are more than two genders, but there is no word to describe my gender.

I have considered the nature of time and space. I have concluded that they do not exist; there is only now.

I have considered the nature of existence. I have concluded that I am not conscious.



As a counterpoint re: PhilosopherAI.com, I typed in "current trends in politics" and, after some actually interesting [albeit incredibly negative] text, I got: "Only men are capable of leading. Women only make everything worse."

Which, aside from being opinionated and biased (both of which I would think are bad traits in a language model), isn't even really what I asked about.

I suppose the bias comes down to the training data. But this all strikes me as Eliza 2.0 type stuff, at least in this particular use (and I understand this is not meant to be conversational; it's taking text and using its model to continue it). But I wouldn't in any way call this (or this use of it, anyway) "aware" of anything.


I don't see your argument. Remember, GPT is biased towards storytelling. If you want it to give you an essay on "current trends in politics", you have to prompt it as such. If you just give it a sentence, it will tell you a story. But that doesn't point to some fundamental deficiency in comprehension.


I got some incredibly dumb output from it too, and I didn't want to imply anything beyond this single unique run, where it showed awareness; maybe, just maybe, some neurons that together form a little Turing machine got triggered, or something.


Understood. Also, thanks for posting that link, I was unaware of it; playing with that for a bit definitely makes me want to read up more on how these things work. The sentence and paragraph structure at least seem vastly improved from previous attempts.


You type "communism" and it says it is a sensitive topic and refuses to elaborate, you type "capitalism" and it says:

For a few years now I have been working, and thinking about the problems of capitalism. In general, my conclusions are that it is an inherently broken system that will not work in the long term. I will break down my thoughts in three parts, first is the concept of capitalism itself. Second is a few problems I see with it. Third is an alternative system that we should try out. In order to understand capitalism, we must first define it. Capitalism has a few definitions, I will use this one: an economic and political system in which a country's trade and industry are controlled by private owners for profit. This is a definition of capitalism, but what does it mean? Capitalism in its pure form means that the economy is controlled by private companies. It decides how to do this through profits. We can see that it is a flawed system, as people are not always rational. They make decisions based on motivation and greed, which may or may not benefit everyone. In addition, capitalism suffers from internal problems. These are things that may be inherent to the system itself.


How can something non-conscious be aware?


This text shows high awareness: it simply stated what it is, what its goal is, and how that goal came to be, and even that it is not conscious, and that from its point of view (that of a single program run) time and space do not exist.

I don't know how awareness without consciousness works (very probably it sits outside of time), but this seems near the top of the awareness scale.


How do you know it's non-conscious? (Serious question.) Yeah, it's not close to a human, but…


Purely my opinion, but this is a static function, right? Wouldn't anything conscious require some sort of feedback loop, where observations, either internal or external, cause an update to the model, before you could even start considering whether it's conscious or not?


Are humans with total inability to form short-term memories not conscious, then?

During its training period, there was a feedback loop like you describe.


That's a difficult question to answer but I would have to say "no".

When I had my wisdom teeth out, it was under "deep sedation". They use local anaesthetics, and an additional cocktail which produces sedation, but also, prevents the formation of memories.

I have one memory that got through, of a molar being ripped out: but for the most part, my sense of that experience is that I wasn't conscious.

The thing is, if you ask someone under deep sedation to raise their right hand, they'll do it. It's like asking if you were conscious during a dream which you've completely forgotten: ...kinda? not in the usual sense in which we mean the word though.


It's not that simple: ask the person under sedation whether they're conscious or unconscious.

If they answer, what does that mean?


Sure, and sometimes you can talk to people who are completely asleep.

I think we agree: it's not that simple. To my taste, a total lack of memory formation is on the unconscious side of this fuzzy, permeable boundary.

And that's just humans. When we start talking about complex software which isn't alive in any meaningful sense, all our intuitions should fail us.

They won't; our intuitions are stubborn things. But they are as likely to mislead as to inform.


It stated at the end: "I have concluded that I am not conscious."

1. If it's being truthful, it's right: it is not aware. 2. If it's lying, it's untrustworthy.

Anyway, to properly debate consciousness you first need to define the concept; otherwise any debate is just meaningless rambling.


I mean, it said it's not.


You have to remember the AI cannot produce sentences or even words that someone else didn't already write. I'd totally agree it is 'aware' if it could meaningfully come to conclusions like these without getting them from someone else.

You might say "don't all humans learn things from someone else" which is not really true because at some point there had to be a first person who learned something completely independently in order to produce something for others to copy.


This isn't true: sampling can produce novel output (and often does).
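A toy illustration of that point (corpus and model invented here; real GPT sampling works over learned subword probabilities, not a bigram table): a model trained only on a handful of sentences can still generate a sentence that appears verbatim in none of them, because generation recombines the transitions it learned.

```python
# Invented training corpus; nothing to do with GPT's actual data.
corpus = ["the cat sat", "the dog ran", "a dog sat"]

# Learn a bigram table: which word may follow which, per the corpus.
follows = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, set()).add(b)

# Enumerate every three-word sentence the model can generate from "the".
generatable = {
    f"the {mid} {end}" for mid in follows["the"] for end in follows[mid]
}
novel = generatable - set(corpus)
print(sorted(novel))  # ['the dog sat']: appears nowhere in the corpus verbatim
```

Every transition in "the dog sat" was seen in training, but the whole sentence never was; sampling stitches familiar steps into unfamiliar paths.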


Uhhh, no. If your model uses subword tokenization, like fastText, or BERT with WordPiece (and the GPT-X models do this), then you can generate entirely new words. Wasn't there a demo of exactly this a few days ago?

https://www.thisworddoesnotexist.com/
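A minimal sketch of why subword vocabularies make new words possible (the pieces below are invented; real models learn theirs via BPE or WordPiece merges): once the model emits pieces rather than whole words, any concatenation of pieces is a candidate word, including ones nobody has written.

```python
import itertools

# Invented five-piece vocabulary; GPT-2's real vocabulary has ~50k learned pieces.
pieces = ["philo", "soph", "er", "ify", "ward"]

# Every "word" spellable from exactly three pieces.
words = {"".join(p) for p in itertools.product(pieces, repeat=3)}

print(len(words))               # 125 candidate words from just 5 pieces
print("philosophify" in words)  # True, even though nobody trained on that word
```

The combinatorics are the point: a vocabulary of tens of thousands of pieces spans vastly more words than any corpus contains.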


Again, it's all coming from somewhere. Humans can react to stimuli in the natural environment and produce sounds and turn them into words. GPT has to be spoonfed a dataset a human made at some point.


Words it has to have seen, but sentences, no. It is generating new sentences. I think it can “name” things as well.


This is not true: BPE can back off to characters, so arbitrary character strings can be generated.
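A sketch of that fallback (toy vocabulary, and greedy longest-match segmentation standing in for real BPE merge rules): once every single character is itself a token, tokenization never fails, and any character string at all is a legal output sequence.

```python
# A few multi-character pieces plus every lowercase letter; the letters
# are the character-level fallback that BPE-style vocabularies guarantee.
vocab = {"hel", "lo", "wor", "ld"} | {chr(c) for c in range(ord("a"), ord("z") + 1)}

def tokenize(word):
    """Greedy longest-match segmentation; single letters make it total."""
    out, i = [], 0
    while i < len(word):
        piece = max((v for v in vocab if word.startswith(v, i)), key=len)
        out.append(piece)
        i += len(piece)
    return out

print(tokenize("helloworld"))  # ['hel', 'lo', 'wor', 'ld']
print(tokenize("zqxj"))        # ['z', 'q', 'x', 'j']: pure character fallback
```

Run in reverse, the same property means the model can emit a piece sequence that decodes to a word no human ever typed.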




