1 ChatGPT query is a little misleading, though. Let's see an 8-hour full-bore Claude Code agent session, or running 3 agents for several hours a day.
It also doesn't include the amortized cost of training the model, as far as I can tell. I believe I heard that training a model takes more energy than all the queries ever run against it, but I could be mistaken.
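For a rough sense of scale, a back-of-envelope sketch - every constant here is a guess for illustration, not a measurement; the oft-cited ~0.3 Wh/query figure is for a single chat turn, and agent calls are likely heavier:

    #include <stdio.h>

    /* Back-of-envelope sketch; all numbers are assumptions, not measurements. */
    int main(void) {
        double wh_per_query    = 0.3; /* oft-cited per-chat-turn estimate */
        double queries_per_min = 4;   /* assumed rate for a busy coding agent */
        double hours = 8, agents = 3;

        double session_wh = wh_per_query * queries_per_min * 60 * hours * agents;
        printf("~%.0f Wh/day, vs %.1f Wh for one query\n", session_wh, wh_per_query);
        return 0;
    }

That prints ~1728 Wh/day - three to four orders of magnitude above the single-query figure, purely from request volume.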
I believe training currently costs all the major vendors significantly more than inference, so I'd be surprised if it didn't also use more power.
And by the look of it, that'll be the norm pretty much forever - unless something fundamental changes about how models can be trained/updated, an "older" model loses value as its knowledge becomes out of date, even if we no longer get improvements from other sources or techniques.
But other things likely change based on "lifetimes" and usage patterns too - e.g. a large battery for an electric car may have a higher upfront manufacturing energy cost than a small ICE engine + fuel tank, but presumably there's a break-even mileage at which the improved per-mile efficiency overcomes that, and every mile after it is a net gain (rough sketch below).
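The break-even arithmetic, with entirely hypothetical numbers just to show the shape of it:

    #include <stdio.h>

    /* Hypothetical amortization sketch: when does an EV's extra embodied
       (manufacturing) energy get paid back by lower per-mile energy use?
       All numbers below are made up for illustration. */
    int main(void) {
        double ev_upfront_kwh   = 15000; /* assumed battery + drivetrain embodied energy */
        double ice_upfront_kwh  =  5000; /* assumed engine + fuel tank embodied energy */
        double ev_kwh_per_mile  =  0.30; /* assumed wall-to-wheel consumption */
        double ice_kwh_per_mile =  1.00; /* assumed fuel energy burned per mile */

        double breakeven = (ev_upfront_kwh - ice_upfront_kwh)
                         / (ice_kwh_per_mile - ev_kwh_per_mile);
        printf("break-even at ~%.0f miles\n", breakeven);
        return 0;
    }

With those guesses it breaks even around 14,000 miles; the same upfront-vs-per-use structure would apply to training vs inference energy.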
Quick, everyone, create decoy repos for them to vibe in. When the feature doesn't appear: "oh, the feature gate system has an incident, try tomorrow". Even better, give the decoy repo an insufferable pipeline that always breaks and get them in a loop trying to fix it. An adversarial "red team" LLM can keep it broken! But tantalizingly, with different problems each time, so progress is felt.
Doom is easy. Better: the Z-machine with an interpreter based on DFrotz, or another port. Then a game can even run on a Game Boy.
For a similar case, check eForth + SUBLEQ. If this guy can emulate a SUBLEQ CPU on a GPU (something like 5 lines of C for the actual implementation; the rest is C headers and the file-opening code), it can run eForth and maybe Sokoban.
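For reference, the whole machine really is about that small. Here's my own sketch of the usual scheme, not that project's exact code; the -1-address I/O convention is one common variant:

    #include <stdio.h>

    /* Minimal SUBLEQ interpreter sketch. Each instruction is three cells
       a, b, c: mem[b] -= mem[a]; if the result is <= 0, jump to c.
       I/O convention (one common variant): a == -1 reads a byte into
       mem[b], b == -1 writes mem[a]; a negative program counter halts. */
    int main(int argc, char **argv) {
        static long m[1 << 16];
        long pc = 0, i = 0;
        FILE *f = argc > 1 ? fopen(argv[1], "r") : NULL;
        if (!f) return 1;
        /* image format assumed: whitespace-separated decimal numbers */
        while (i < (1 << 16) && fscanf(f, "%ld", &m[i]) == 1) i++;
        fclose(f);
        while (pc >= 0) {
            long a = m[pc], b = m[pc + 1], c = m[pc + 2];
            pc += 3;
            if (a == -1)      m[b] = getchar();
            else if (b == -1) putchar((int)m[a]);
            else if ((m[b] -= m[a]) <= 0) pc = c;
        }
        return 0;
    }

Everything else - the Forth system, the editor, the games - lives in the image file that this loop executes.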