edit: it's a glib joke, but we should think more about what makes a 'mind'. I highly recommend people check out the Children of Time series (sci-fi novels) as an enjoyable way to do that.
> But the blur continues to linger on, and has a much wider impact than you might suspect. You see, it is not only that the question “Can machines think?” is regularly raised; we can —and should— deal with that by pointing out that it is just as relevant as the equally burning question “Can submarines swim?”

> A more serious byproduct of the tendency to talk about machines in anthropomorphic terms is the companion phenomenon of talking about people in mechanistic terminology. The critical reading of articles about computer-assisted learning —excuse me: CAL for the intimi— leaves you no option: in the eyes of their authors, the educational process is simply reduced to a caricature, something like the building up of conditional reflexes. For those educationists, Pavlov’s dog adequately captures the essence of Mankind —while I can assure you, from intimate observations, that it only captures a minute fraction of what is involved in being a dog—.
If by flying we mean traversing from point A to point B without touching the ground... And by thinking we mean conjuring up a set of concepts (thoughts) that can be exported to a standardized lossy format (explaining oneself) and that satisfy a set of requirements (the conditions of a question)... then I'd cautiously say that, yes, LLMs can think... even if they do it by a different method than we do ourselves.
It would be interesting to take the same analogy to the feelings domain... but I'm afraid that coffeeshops are not open at the moment.
Imagine you interact with a program that can answer any question you have, explains itself well, seems perfectly intelligent. But then you get to read the source code and it's just petabytes of if/else statements with every question you asked and every other one you can imagine. A practically infinite list. But nothing more complex than that.
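For the sake of the thought experiment, here is a minimal sketch in Python (the questions and canned answers are hypothetical, of course; a real table would need an entry for every imaginable question, hence the petabytes):

```python
# A toy sketch of the thought experiment above (all questions and canned
# answers here are made up): the entire "intelligence" is one giant
# lookup from question to pre-written reply, and nothing more.

LOOKUP = {
    "can machines think?": "That depends on what you mean by 'think'.",
    "can submarines swim?": "Only in the sense that matters to them.",
    # ...petabytes more entries, one per imaginable question...
}

def answer(question: str) -> str:
    # Normalize the question and return its canned reply, if we have one.
    return LOOKUP.get(question.strip().lower(), "No entry for that question.")

print(answer("Can machines think?"))
```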
Interesting: I thought that Edsger Dijkstra's reference to submarines was the one in "The threats to computing science" ( https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/E... ), earlier, from 1984, where he mentioned it in the context of the initial uncertainty about the scope of Computer Science, with reference to Turing's question "Can Machines Think?":
> The Fathers of the field had been pretty confusing: John von Neumann speculated about computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim
But that later speech warns against anthropomorphism.
It remains that metaphors are sometimes useful. Only, we have to remain wary not to jump outside the intersection (between the described and the description).
> I highly recommend people check out the Children of Time series
I can't second this recommendation enough. I just finished that book last week, and it is easily one of my favorites. Tons of little delightful epiphanies scattered throughout, and—as difficult as it must be to convey an alien mode of cognition and sensory processing—Adrian Tchaikovsky managed to do it well.
I like this question a lot. I'd say the answer is that submarines swim quite well, with good speed, amazing range of depth, and excellent directional stability. They are, however, poor swimmers when it comes to agility (quick changes in speed or direction).
The label "swimming" or "mind" is just a label; what matters is how things compare along any metric we can evaluate.
Not an hour ago I was remarking to my roommate that airplanes are "better" at flying than bumblebees, but the bumblebee does things no machine can, and that I see AI versus human intelligence similarly. Yours is pithier though!
That feels very on point: these are tools. It's fun to play the thought experiment of being able to ask a tool (GPT) what it thinks or feels, but isn't real consciousness as undisprovable for a hammer as it is for another human being? Imho we need to focus on what these things can do to improve human lives and not get too distracted anthropomorphizing them, even though it's super fun.
The author is supposedly[1] a computer scientist; he should really have pushed back on using the term "mind" here. But I guess it was too tempting to resist the click-bait impulse.
There could be a number of philological analyses brought to 'mind', but maybe one is already fitting.
Our fellow Croon on this page reports the following remarkable exchange with an LLM chatbot: "What weighs more, one pound of feathers or two pounds of gold? // They weigh the same, as they both weigh one pound".
Now: the mind is the place where contents happen and are remembered (in Latin, "mens", "memor", "maneo" - the permanence - but also "moneo", which is "having you remember to have you think", and "monstro", which is "showing you to have you remember"). If the LLM construct structurally does not remember the "_two_" pounds of something in the example above, that is the ad-mon-ition (from the said "moneo").
...But also: in the same family is Sanskrit "mantu", or "the advisor". Now, for something to properly be said to be capable of advising you, there are constraints...