I'm in the middle of a suit against them for harassment received from surveillance employees. Can't say much more than that (active case), but it's extremely common. The methods used once surveillance is discovered are deliberately made to be so unusual that you're either terrified into silence or are afraid no one would believe you, so that the operation can continue.
I dunno. They applied COINTELPRO tactics from the 1960s to take down the Sicilian Mafia in the 1970s once Hoover was out of the picture. One of the best ways for cops to get some credibility is to bust somebody who deserves it.
The convergence is only valid if the distance ladder is accurate. There are several inferential bottlenecks in the distance ladder that could undermine the whole current distance model. Standard candles and redshift measurements are calibrated against each other, for example; if either is off, the whole ladder could be invalid.
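To make the coupling concrete, here's a back-of-envelope sketch (illustrative numbers only, not real survey data) of how a single calibration error propagates up the ladder, since each rung is calibrated against the one below it:

```python
# Back-of-envelope sketch of calibration error propagating up the
# distance ladder. All numbers are placeholders for illustration.

true_h0 = 70.0          # km/s/Mpc, placeholder value
cepheid_bias = 1.03     # suppose Cepheid distances come out 3% too long

# SNe Ia standard candles are calibrated in Cepheid host galaxies,
# so they inherit the same fractional bias...
sn_bias = cepheid_bias

# ...and distances 3% too long make the inferred H0 ~3% too low.
inferred_h0 = true_h0 / sn_bias

print(f"inferred H0: {inferred_h0:.1f} km/s/Mpc")  # ~68.0
```

The point of the toy: the error isn't independent at each rung, it multiplies through, which is why a single bad calibration can move the whole answer.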
Probably some combination of quantizing down from original fp16 weights and changes to the system prompt used for chat. Both can cause degraded quality, the former more than the latter.
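For anyone unfamiliar with why quantization degrades quality, here's a toy sketch (plain Python, not any real inference runtime) of symmetric per-tensor int8 quantization and the rounding error it bakes into every weight:

```python
# Toy sketch of symmetric per-tensor int8 quantization of fp16-ish
# weights. Not any specific runtime; just shows the rounding error.

def quantize_int8(weights):
    """Map floats onto int8 [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; the rounding loss is permanent."""
    return [qi * scale for qi in q]

weights = [0.0213, -0.517, 0.0031, 0.994, -0.062]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

for w, r in zip(weights, restored):
    print(f"{w:+.4f} -> {r:+.4f} (err {abs(w - r):.5f})")
```

Small weights take the worst relative hit, and across billions of parameters those errors accumulate into visible quality loss — which is why it tends to dominate over prompt changes.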
I attended lectures of his at IU. My impression is that he has some sort of chip on his shoulder. He's been committed to the ideological position that AI is Hard and Won't Happen for decades, and having models be so competent at translation and other tasks (he used to argue that google translate was essentially impossible) has probably thrown him off balance.
Current AI doesn't give us what Hofstadter has said he wants from AI.
If you try ChatGPT on any of his creative analogy problems it fails completely. Given abc->abd, what does xyz go to? ChatGPT says xyd until you further prompt it.
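For reference, the literal rule the puzzle tempts you toward can be written in a few lines (a toy sketch, nothing to do with how ChatGPT actually works) — and doing so exposes the catch Hofstadter built in:

```python
# The literal rule behind abc -> abd:
# "replace the last letter with its alphabetic successor".
# Applying it blindly to xyz is exactly where it breaks.

def successor_rule(s):
    last = s[-1]
    if last == "z":
        raise ValueError("z has no successor: the literal rule breaks down")
    return s[:-1] + chr(ord(last) + 1)

print(successor_rule("abc"))  # abd
print(successor_rule("pqr"))  # pqs
# successor_rule("xyz") raises -- the puzzle demands a creative
# reframing (e.g. "wyz", treating xyz as a mirror of abc) instead.
```

Answering "xyd" dodges the rule entirely by copying the literal letter, which is why it reads as a failure of the analogy rather than a defensible alternative.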
It is a little disappointing that useful language translation is possible with that little understanding of content. Most sentences a person might choose to say don't mean very much.
I'm depressed by that too. There's something about the facility and the emptiness of these models that suggests our ability to think is a lame trick.
I think "competent" is the wrong adjective, though. "Capable" is more accurate and the distinction there is the issue.
I'm sure she did use this exact example because Hofstadter and others have been using it since the mid 80s, yes. I don't think he invented the test. I have seen the chatbot fail at it and I've seen it succeed, more or less at random.
You are, however, missing the point.
If you hold a small dog over water it will make swimming motions. It didn't learn those motions. They're encoded into the genes that make its neural matrix and they go off even if it isn't actually in the water.
The ability of ChatGPT to perform (what we consider to be) extremely sophisticated linguistic operations while also being dumb as a sack of hammers about simple questions is upsetting.
A system that sometimes fails the abc->abd test but will easily rewrite the lyrics to "Baby Got Back" in the style of Shakespeare and then Bukowski and then translate it into passable Japanese suggests that many of the activities that we consider creative and useful are actually shallow, stupid, and easy imitations of things we've heard from other people.
I have no urge to disprove ChatGPT. It's extraordinary and powerful. I had just hoped that doing interesting things linguistically was a good measure of internal understanding.
With the way LLMs work we know there is no stateful model in there, no cognitive loop, no understanding, no eventual emergent consciousness. The weights are precalculated and it does not reconsider them. No aspect of the system is based on any possible notion of an "I" that is feeling and looking at itself. There is no memory space set up to store that selfhood and it forgets everything between conversations. The slate is wiped clean.
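That statelessness can be sketched in a few lines (a toy stand-in, not a real model): generation is a pure function of frozen weights and the transcript, so any "memory" lives entirely in the text the caller resends each turn.

```python
# Toy stand-in for a stateless LLM: frozen weights, no internal state.
# The only "memory" is the transcript, which the caller must maintain
# and resend on every turn.

FROZEN_WEIGHTS = {"hello": "hi there", "bye": "goodbye"}  # never updated

def generate(weights, transcript):
    """Pure function: same weights + same transcript -> same reply."""
    last_user_msg = transcript[-1]
    return weights.get(last_user_msg, "...")

transcript = []
for user_msg in ["hello", "bye"]:
    transcript.append(user_msg)               # caller holds all state
    reply = generate(FROZEN_WEIGHTS, transcript)
    transcript.append(reply)

print(transcript)  # the whole "conversation" lives outside the model
```

Nothing inside `generate` persists or updates between calls — drop the transcript and the slate really is wiped clean.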
Yet here it is playing word games beautifully. Failing very basic mathematical questions and writing kick-ass rock lyrics.
As a proponent of emergent strong AI I'd assumed that these abilities depended on self awareness and that any AI capable of doing these neat tricks would be conscious. Instead it appears that self-awareness is an optional curse, an evolutionary side-effect that allows us to suffer and be disappointed in ourselves but is not necessary for creating poetry or visual art. The parts of T.S. Eliot we care about were just a dog wiggling his legs over something he thought was a pond.
There's a sadness there. The point is definitely not "look how dumb the robot is, hurr-de-durr."
No, you are. You are making a lot of hay out of a simple, dumb, trivial technical flaw, which is easily fixed, and reading deep lessons and 'sadness' into it. It's not interesting at all (which is part of why I'm angry I have to waste so much of my life explaining to people why their character-based gotcha is actually uninformative and irrelevant).
It no more tells us anything about deep learning or human intelligence than would waving a piece of paper reading 'ABC is to XYZ as etc' in front of a blind person and watching them promptly fail the test, even if they are a skilled songwriter. Duh! Of course they fail the test! They're blind. Their eyes don't let them see the individual characters, any more than a BPE tokenization lets an LLM see individual characters. You would be a fool to conclude that 'blind humans are not actually intelligent, and yet, he wrote such good lyrics, isn't it sad that the parts of T.S. Eliot we cared about were just some sort of wiggling dog leg' on the basis of failure due to such an obvious mechanical perceptual flaw. Maybe blind people are less intelligent on average due to their handicap, sure, maybe human intelligence is in fact a sad dog leg wiggling around - both of those are entirely possible - but you need a different measurement, and it should have been obvious to you and everyone else before you wasted your time waving the paper. The waving paper told you nothing of interest whatsoever.
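To spell out the BPE point for anyone who hasn't internalized it, here's a toy illustration (made-up vocabulary and a greedy longest-match rule, not any real tokenizer): frequent strings collapse into single opaque token ids, so the model is handed no character-level view of its input at all.

```python
# Toy BPE-like tokenizer: made-up vocabulary, greedy longest match.
# The point: "xyz" arrives as one opaque id, not three letters.

VOCAB = {"abc": 101, "abd": 102, "xyz": 103, "x": 7, "y": 8, "z": 9}

def tokenize(text, vocab):
    """Greedy longest-match tokenization, BPE-like in spirit."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):      # try longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"unknown piece at {i}")
    return tokens

print(tokenize("abc", VOCAB))  # [101] - one id, no letters visible
print(tokenize("xyz", VOCAB))  # [103] - the analogy's letters are gone
```

A model trained on those ids has to reverse-engineer spelling statistically, which is why character-level puzzles are a measurement of the tokenizer, not the intelligence.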
So the refugees who put their crypto on a thumbdrive so they could get their money out of Ukraine as the infrastructure was bombed out of existence are scum of the earth? There are absolutely zero legitimate use cases?