Self-driving cars aren't fully autonomous yet. They occasionally require human intervention to make decisions, which I would guess makes it difficult to steal these cars without being detected. They're also constantly cloud-connected, so the operator knows where the car is at all times. I imagine these factors may deter theft of self-driving cars for a while longer.
Repossessing a self-driving car, on the other hand...
The plan, if you were using a self-driving car's autonomy to help steal it, would not be to have it self-drive from the owner's home in, say, San Francisco, to a chop shop in Tijuana where your gang awaits, which would require it to make a long trip without human intervention.
It would be to have it self-drive from the owner's home to someplace nearby that isn't associated with the thieves, getting it out of sight of the owner's security cameras and whatever other cameras in the neighborhood might be watching. There the gang can disable the cloud connection and then transport it by truck, or drive it in manual mode, to their chop shop.
> Way to cherry pick the one definition that fits your argument and ignore all the other definitions which kinda fit for what this model is doing.
There is no definition on the linked page that isn't either self-referential or even less applicable in the context of transformer-based LLMs.
* "an individual act or product of thinking": This is self-referential.
* "a developed intention or plan": The model doesn't have an intention or plan - it predicts the probability of text based on a user prompt mixed with corporate-inflicted bias.
* "the intellectual product or the organized views and principles of a period, place, group, or individual": GPTs are not periods, places, groups or individuals. I don't doubt that the input to the model is based on thought, but this is not the same as the model thinking. Maybe this is where your confusion originates from?
Bonus: I asked GPT-4o "Can you think?" (sorry, I won't pay for o1). Even the model itself was trained to state that it can't think:
"I can process information, analyze patterns, and provide responses based on data and logical reasoning. However, I don't "think" in the human sense—I don't have consciousness, emotions, or subjective experiences. My abilities are limited to generating text, solving problems, and assisting with tasks based on the vast amount of information I've been trained on.
In essence, I simulate reasoning, but I don't have awareness or independent thought."
I would add that I strongly doubt that it is capable of even simulating "reasoning" as is claimed by the model "authors" (not sure if we can say they are authors since most of the model isn't their IP). And I can prove that the models up to 4o aren't generally able to solve problems.
The question really is whether a group of people is attempting to anthropomorphize a clever matrix processor to maximize hype and sales. You'll have to answer that one for yourself.
What does self-referential have to do with anything? These LLMs have proven they can "talk about themselves".
> an individual act or product of thinking
Emphasis on "product of thinking". Though you'll probably get all upset by the use of the word "thinking". However, people have applied the word "thinking" to computers for decades. When a computer is busy or loading, they might say "it's thinking."
> a developed intention or plan
You could certainly ask this model to write up a plan for something.
> reasoning power
Whether you like it or not, these LLMs do have some limited ability to reason. It's far from human-level reasoning, and they VERY frequently make mistakes, hallucinate, and misunderstand, but these models have proven they can reason about things they weren't specifically trained on. For example, I remember seeing someone who made up a new programming language, one that had never existed before, and they were able to discuss it with an LLM.
No, they're not conscious. No, they don't have minds. But we need to rethink what it means for something to be "intelligent", or for something to "reason", in a way that doesn't require a conscious mind.
For the record, I find LLM technology fascinating, but I also see how flawed it is, how overhyped it is, that it is mostly a stochastic parrot, and that currently its greatest use is as a grand-scale bullshit and misinformation generator. I use ChatGPT sparingly, only when I'm confident it may actually give me an accurate answer. I'm not here to praise chatbots or anything, but I also don't have a blind hatred for the technology, nor do I immediately reject everything labeled as "AI".
> What does self-referential have to do with anything?
It means that Webster's definition of "thought" as "an individual act or product of thinking" refers to the word being defined (thought -> thinking) and is thus self-referential. I already said in my prior response that if you mean the input of the model is a "product of thinking", then I agree, but that doesn't give the model the ability to think. It just means that its input has been thought up by humans.
> When a computer is busy or loading, they might say "it's thinking."
Which I hope was never meant to be a serious claim that a computer would really be thinking in those cases.
> You could certainly ask this model to write up a plan for something.
This is not the same thing as planning. Because it's an LLM, if you ask it to write up a plan, it will do its thing and predict the most probable next series of words based on its training corpus. This is not the same as actively planning something with the intention of achieving a goal. It's basically reciting plans that exist in its training set, adapted to the prompt, which can look convincing to a certain degree if you are lucky.
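To make the "it just predicts the most probable next word" point concrete, here is a minimal toy sketch in Python. The corpus, the bigram counting, and the function names are all made up purely for illustration; a real transformer computes its probabilities in a far more sophisticated way, but the generation loop works on the same principle of repeatedly appending the most probable next word:

    from collections import Counter, defaultdict

    # Toy illustration only: a bigram counter standing in for "the model".
    corpus = "first gather requirements then set a budget then hire a team".split()

    # Count how often each word follows each other word in the "training" text.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def most_probable_next(word):
        followers = bigrams.get(word)
        return followers.most_common(1)[0][0] if followers else None

    def continue_text(prompt_word, steps=6):
        # Greedy decoding: always append the most probable next word.
        out = [prompt_word]
        for _ in range(steps):
            nxt = most_probable_next(out[-1])
            if nxt is None:
                break
            out.append(nxt)
        return " ".join(out)

    print(continue_text("first"))
    # prints: first gather requirements then set a budget

The output looks like a "plan", but nothing in the loop represents a goal or an intention; it is just the statistics of the text it was built from, echoed back.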
> Whether you like it or not, these LLMs do have some limited ability to reason.
While this is an ongoing discussion, there are various papers that make good attempts at proving the opposite. If you think about it, LLMs (before the trick applied in the o1 model) cannot have any reasoning ability, since the processing time for each token is constant. Whether adding more internal "reasoning" tokens changes anything about this, I'm not sure anyone can say for certain at the moment since the model is not open to inspection, but I think there are many pointers suggesting it's rather improbable. The most prominent is the fact that LLMs come with a > 0 chance of the next predicted word being wrong, so real reasoning is not possible because there is no way to reliably check for errors (hallucination).

Did you ever get "I don't know." as a response from an LLM? Might that be because it cannot reason and instead just predicts the next word based on probabilities inferred from the training corpus (which, for obvious reasons, doesn't include what the model doesn't "know", and reasoning would be required to infer that it doesn't know something)?
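As a back-of-the-envelope illustration of the "> 0 chance per token" point: assuming (unrealistically) that per-token errors were independent and using made-up error rates, the probability of producing a long error-free chain shrinks quickly:

    # Back-of-the-envelope only: if each generated token were independently
    # correct with probability p (numbers made up for illustration), the
    # chance that an n-token chain contains no error is p**n.
    for p in (0.999, 0.99, 0.95):
        for n in (100, 1000):
            print(f"p={p}, n={n}: P(no error) = {p**n:.3f}")
    # p=0.99 over 100 tokens already leaves only ~0.366 probability of an
    # error-free chain, and there is no built-in step that goes back and
    # verifies tokens that have already been emitted.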
> I'm not here to praise chatbots or anything, but I also don't have a blind hatred for the technology, nor do I immediately reject everything labeled as "AI".
I hope I didn't come across as having "blind hatred" for anything. I think it's important to understand what transformer-based LLMs are actually capable of and what they are not. Anthropomorphizing technology is, in my estimation, a slippery slope. Calling an LLM a "being", or saying it is "thinking" or "reasoning", are only some examples of what "sales-optimizing" anthropomorphization can look like. This comes not only with the danger of investing in the wrong thing, but also of making wrong decisions that could have significant consequences for your future career and life in general. Last but not least, it might be detrimental to the development of future useful AI (as in "improving our lives"), since it may lead decision-makers in politics to draw the wrong conclusions in terms of regulation and so on.
Fascinating stuff! It reminds me a little of chaos theory. The average person probably wouldn't expect to find structure in chaos, but that's exactly what we have.
I know the experiment you're referring to. It's been a long time since I saw it, and I have no idea how to find it now, but you're definitely misremembering some details. Dogs do have a sense of time, but in that experiment, it actually had to do with scent. While the owner was away, their scent would gradually dissipate throughout the day. At a certain point, the scent was weak enough that the dog knew it was about time for the owner to be home.
In the experiment, they then did everything they could to remove the owner's scent from the home. The owner came home at the usual time, but the dog wasn't expecting it, because the scent had been removed earlier, and so the dog was clearly surprised and confused.
Dogs have a very strong sense of smell, which we humans often fail to appreciate. It's not like dogs can smell their owner coming home from miles away; that's a little preposterous. But they can use their sense of smell in other ways that are not so obvious to us, such as to maintain a sense of time.
Aha, interesting. I looked up some info about it, and it seems that in another experiment the dogs also reacted when the owner came home at irregular times. Quite interesting:
There are a lot of people who have had similar experiences, though. You can check the comments on that video. The parent comment that started this discussion is also an example.
Patents can be licensed, and automakers are already effectively at a patent stalemate, so any enforcement is unlikely.
Just like Microsoft's various patents on Linux haven't stopped companies from making Android phones; they've just resulted in some of them paying Microsoft money for patent licensing.
Acquisitions and mergers are one of the great evils of capitalism, serving only to consolidate power into megacorporations. Governments are far too permissive of them. The vast majority should be flat-out rejected.
The sad truth is, most people simply don't care about anything that doesn't affect them. Yes, there are people who do care, like many of us here on Hacker News, but we are very much in the minority. And even among those who do care, very few are really willing to do anything about it, far from enough to bring about any meaningful change.