Gemini is weird, and I'm not suggesting that's down to ingenuity on Google's part. It might be the result of genuine limitations of the current architecture (or is it by design? Read on).
Here's what I've noticed with Gemini 3: it often repeats itself, with 80% of the text identical and only the last couple of lines differing. And I mean it repeats these paragraphs 5-6 times. Truly bizarre.
From all that almost-GPT-2-quality text, it somehow derives genuinely useful insights and coherent explanations in the final answer. Some kind of multi-head parallel processing + voting mechanism? An evolution of MoE? I don't know. But in a way this fits the mental model of massive processing at Google, where a single super cluster can drive 9,000+ connected TPUs. Anyone who knows more, care to share? Genuinely interested.
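If it really is something like sample-and-vote, the simplest version of that idea is self-consistency decoding: draw several independent samples for the same prompt and keep whatever final answer the majority agrees on. That's pure speculation on my part, not a claim about Gemini's actual internals; here's a minimal toy sketch, where `sample_candidate` is a hypothetical stub standing in for N independent temperature > 0 decodes of the same prompt:

```python
from collections import Counter
import random

def sample_candidate(prompt: str, rng: random.Random) -> str:
    # Hypothetical stand-in for one sampled model decode. A real system
    # would call the model N times at temperature > 0; here most samples
    # agree and a few drift, mimicking the near-duplicate drafts described above.
    answers = ["twist collar on the stalk"] * 4 + ["button on the dash"]
    return rng.choice(answers)

def self_consistency(prompt: str, n_samples: int = 8, seed: int = 0) -> str:
    """Sample several candidate answers, then return the majority vote."""
    rng = random.Random(seed)
    candidates = [sample_candidate(prompt, rng) for _ in range(n_samples)]
    winner, _count = Counter(candidates).most_common(1)[0]
    return winner

if __name__ == "__main__":
    print(self_consistency("How do I turn on the fog lights?"))
```

If something like this were running server-side and the intermediate drafts leaked into the visible output, it would look exactly like the repeated near-identical paragraphs followed by one coherent final answer. Again, just a guess.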
I get this too. I've had it apologize for repeating something verbatim, then do it again word for word, despite my asking for clarification or pointing out that it's incorrect and not actually searching the web as I requested. Over and over and over, until some bit flips and it finally gives the information requested.
The example that stands out most clearly: I asked it how to turn on the fog lights in my rental vehicle, giving it the exact year, make, model, etc. For 6-8 replies in a row it gave the exact same answer about a (non-existent) button on the dash. Then something finally clicked, it searched the Internet, and it accurately said the control was a twistable collar midway down the turn-signal stalk.