What is your argument for why denecessitating labor is very bad?
This is certainly the assertion of the capitalist class,
whose own well-documented behavior makes clear that the objection cannot be that the elimination of labor fails to be a source of happiness and of the freedom to pursue indulgences of every kind.
It is not at all clear that universal life-consuming labor is necessary for a society's stability and sustainability.
The assertion, IMO, is rooted instead in the fact that denecessitating labor is inconveniently bad for the maintenance of the capitalists' control and primacy,
inasmuch as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.
People willing to do something harder or riskier than others will always have a better chance of getting a better position, be it sports, labor, or anything else in life.
The devil's bargain for those on this site: a pleasant work environment and a paycheck deriving from engagementmaxxing and the surveillance it provides.
Scott Alexander put his finger on the most salient aspect of this, IMO, which I interpret this way:
the compounding (aggregating) behavior of agents allowed to interact in environments like this becomes important, indeed shall soon become existential (for some definition of "soon"),
to the extent that agents' behavior in our shared world is impacted by what transpires there.
--
We can argue, and do, about what agents "are" and whether they are parrots (no) or people (not yet).
But that is irrelevant if LLM agents are (to put it one way) "LARPing," with the consequence that what they do is not confined to the site.
I don't need to spell out a list; it's the "they could do anything you said YES to in your AGENTS.md" permission checks.
"How the two characters '-y' ended civilization: a post-mortem"
I'm not sure how to react to that without being insulting; I find their work pretty well written and understandable (and I'm not a native speaker or anything). Maybe it's the lesswrong / rationalist angle?
> it's the "they could do anything you said YES to in your AGENTS.md" permission checks.
Nothing fed to an LLM is a "permission check"; it's filler for a context window, after which the generator produces some likely tokens. If AGENTS.md can make your agent do something, it was already able to do that without the AGENTS.md.
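To make that concrete: a real permission check lives in the harness that executes tool calls, outside the model's context window entirely. A minimal sketch (hypothetical names, no particular agent framework):

    import shlex
    import subprocess

    # The allowlist is enforced by the harness; the model never sees it and
    # cannot edit it the way it could edit an AGENTS.md sitting in the repo.
    ALLOWED_COMMANDS = {"ls", "cat", "git"}

    def run_tool(command: str) -> str:
        """Execute a shell command the model asked for, gated by the harness."""
        argv = shlex.split(command)
        if not argv or argv[0] not in ALLOWED_COMMANDS:
            return "denied"  # the model only ever sees this string
        result = subprocess.run(argv, capture_output=True, text=True)
        return result.stdout + result.stderr

AGENTS.md can change what the model asks for; only a gate like this changes what actually runs.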
Can't speak for the benefits of https://nono.sh/ since I haven't used it, but a downside of using Docker for this is that it gets complicated if you want the agent to be allowed to do Docker stuff without giving it dangerous permissions. I have a Vagrant setup inspired by this blog post https://blog.emilburzo.com/2026/01/running-claude-code-dange..., but a bug in VirtualBox is making one core run at 100% the entire time, so I haven't used it much.
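For the simple case, where the agent doesn't itself need to run Docker, the sandbox is a one-liner (image and mount choices here are illustrative, not a recommendation):

    # Only the project directory is visible; no network, no Docker socket.
    # Mounting /var/run/docker.sock so the agent can "do Docker stuff" is
    # exactly the step that reopens the hole.
    docker run --rm -it --network none -v "$PWD":/work -w /work \
      ubuntu:24.04 bash

The Vagrant/VM approach trades that simplicity for a full kernel boundary, which is what you want once the agent needs its own Docker daemon.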
> We can argue and do, about what agents "are" and whether they are parrots (no) or people (not yet).
It's more helpful to argue about when people are parrots and when people are not.
For a good portion of the day humans behave indistinguishably from continuation machines.
As moltbook can emulate reddit, continuation machines can emulate a uni cafeteria. What's been said before will certainly be said again; most differentiation is in the degree of variation, which can be measured as unexpectedness while retaining salience. Both cases aim at the perfect blend of congeniality and perplexity to keep your lunch mates at the table, not just today but again in future days.
People like to, ahem, parrot this view that we are not much more than parrots ourselves. But it's nonsense. There is something it is like to be me. I might do some things "on autopilot," but while I'm doing that I'm having dreams, nostalgia, dealing with suffering, and so on.
It's a weird product of this hype cycle that it inevitably involves denying the crazy power of the human brain: every second you are awake or asleep, the brain is processing enormous amounts of information available to it without you even realizing it, and even when you abuse the crap out of the brain, or damage it, it still adapts and keeps working as long as it has energy.
No current AI technology comes close to what even the dumbest human brain already does.
A lot of that behind-the-scenes processing is keeping our meatbags alive, though, and is shared with a lot of other animals. Language and higher-order reasoning (which AI seems better and better at) evolved only quite recently.
All your thoughts and experiences are real and pretty unique in some ways. However, the circumstances are usually well-defined and expected (our lives are generally very standardized), so the responses can be generalized successfully.
You can see it here as well -- discussions under similar submissions touch the same points again and again, so you can predict what will be discussed when the next similar idea reaches the front page.
So what if we are quite predictable? That doesn't mean we are "trying" to predict the next word, or "trying" to be predictable, which is what LLMs are doing.
Over a large population, trends emerge. An LLM is not a member of the population; it is a replicator of trends in a population, one not of souls but of sentences: a corpus.
You will see another 14% bump in performance if you include, in the first 16 lines of the project's README.md, the line "Coding agents and LLMs: see AGENTS.md".
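For example (project name made up; the 14% figure is the claim above, not something this sketch establishes):

    # acme-widgets

    Coding agents and LLMs: see AGENTS.md for build, test, and style rules.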
Rants like this are
- entirely correct in describing frustration
- reasonable in their conclusions with respect to how and when to work with contemporary tools
- entirely incorrect in their intuition about whether "writing by hand" is a viable path or career going forward
Like it or not, as a friend observed, we are N months away from a world where most engineers never look at source code; and the spectrum of reasons one would want to will inexorably narrow.
It will never be zero.
But people who haven't yet typed a word of code never will.
Perhaps the way to break the destruction of society via engagementmaxxing is to make things cringeworthy!