The thing you want has a kind of academic jargon name (coeffects algebra with graded/indexed monads for discharge) but is very intuitive, and it can do useful and complete attestation without compromising anyone's credentials (in the limit case because everyone chooses what proxy to run).
I compile nix derivations to a well-posed effect/coeffect/graded monad algebra so I can do a real bill of materials and build on an action cache engine maintained by professionals, but that's mostly for long-tail stuff.
These days with a la carte access to all of the container ecosystem primitives as nice, ergonomic, orthogonal operations I don't really see the value in nixpkgs. Don't really see the value in a container registry either: a correctly attested content addressable store with some DNS abbreviations is 100 lines of code because mostly it's git and an S3 shim.
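For what it's worth, the "100 lines of code" claim is not much of an exaggeration. Here's a minimal sketch of the content-addressable half, local-disk only; the S3 shim and the DNS-abbreviation layer (short names mapping to digests) would sit in front of it, and everything here (class name, SHA-256 as the digest) is illustrative, not the actual store described above:

```python
import hashlib
from pathlib import Path

# Minimal content-addressable store: the key IS the SHA-256 of the bytes,
# so writes are idempotent and integrity checks are free.
class CAS:
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        (self.root / digest).write_bytes(data)  # same bytes -> same path
        return digest

    def get(self, digest: str) -> bytes:
        return (self.root / digest).read_bytes()
```

Swap `Path` operations for S3 GET/PUT and you have the shim; the git-like part is that nothing is ever mutated, only new digests appear.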
The category error at the heart of nixpkgs is the idea that the environment in which software is compiled must resemble the environment in which it executes. Silly stuff. So whether you're a patchelf --rpath ... Person or an unshare --bind-mount Enjoyer (isomorphic), just remember: in 2026 the guy with the daemon that runs as root does not want you to have nice things.
They can all write lean4 now; don't accept numbers that don't carry proofs. The CAS I use for builds has a coeffect discharge cert in the attestation header, a couple lines of code. Graded monads are a snap in CIC.
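"A snap in CIC" is literal: the whole interface is a few lines. A generic sketch (assuming Mathlib's `Monoid` and Lean's default auto-bound implicits; not the attestation code itself):

```lean
import Mathlib.Algebra.Group.Defs

/-- A monad graded by a monoid `G` of effect/coeffect grades:
    `pure` sits at the unit grade, `bind` multiplies grades,
    so the type index is the bill of materials. -/
class GradedMonad (G : Type) [Monoid G] (M : G → Type → Type) where
  pure : α → M 1 α
  bind : M g α → (α → M h β) → M (g * h) β
```

The discharge cert is then just the (checked) grade of the whole build expression.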
There are some numbers that are uncomputable in Lean. You can approximate them in Lean, but those approximations may still be wrong. Lean's noncomputable machinery is very interesting.
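A concrete instance of that definable-but-not-executable gap: with classical choice you can write a definition the kernel happily typechecks but the compiler cannot run, which is exactly what the `noncomputable` marker tracks (function name is mine):

```lean
open Classical

/-- Defined and provably well-behaved, yet not executable:
    deciding `∃ n, f n = true` is a halting-style search, so the
    `Classical.propDecidable` instance forces `noncomputable`. -/
noncomputable def everTrue (f : Nat → Bool) : Bool :=
  if ∃ n, f n = true then true else false
```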
As with many things the secret sauce of AI-assisted software engineering is mathematics, skepticism, and hard work.
The first thing anyone should do is immediately understand that they are in a finite-sum adversarial relationship with all of their vendors. If you're getting repeatably and unambiguously good outcomes with Claude Code, you're either in the bandit arm where the high-Twitter-follower-count people go or you're counting cards in a twelve-deck shoe and it'll be 24 soon. Switch to opencode today: the little thing in the corner is a clock, and a prominent clock is more or less proof you're not in a casino.
There is another key observation that took me a long time to internalize, which is that the loss surface of formalism is not convex: most of us have tried to lift the droids out of the untyped lambda calculus and into System F, and this does not, as a general thing, go well. But a droid in the untyped lambda calculus will with 100% likelihood eventually Superfund your codebase, no exceptions.
The droids are wildly "happy" for lack of a better term in CIC but the sweet spot is System F Omega. A web app that's not in Halogen is an unforced error now: pure vibecoders can do lastingly valuable work in PureScript, and they can't in React, which is irretrievably broken under any sane effects algebra.
So AI coding is a mean reversion, in a sense. Knuth and Lamport and Hoare and Dijkstra had an LLM: it was an army of guys in short-sleeved dress shirts with half a page of fanfold system prompt on every page. That is automatable now, but there's a straightforward algorithm for remaining relevant: crank the ambition up until Opus 4.5 is whiffing.
Computer Scientist is still a high impact job. The cast of Office Space is redundant. But we kinda knew that the first time we saw Office Space.
Personally I'm working harder than ever on harder problems than ever, with pretty extreme AI-assist hygiene: you spend all your time on hard problems with serious impact. The burden of understanding the code and being able to quit vim has gone up, not down, and mathematics is absorbing computer programming.
The popular narrative about Claude Code 10x-ing JavaScript projects is maskirovka.
I mostly write lean4 now and emit proof-carrying System F Omega via rfl. It's the right level of abstraction when the droids have been pinned to theory-laden symbolisms. It's also just pleasant to use.
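The `rfl` workflow in miniature: when two terms are definitionally equal, the proof term is literally reflexivity, checked by the kernel by reduction alone, so the certificate travels with the code for free:

```lean
def double (n : Nat) : Nat := n + n

-- The kernel verifies this by computing both sides: the proof *is*
-- the computation, which is what makes the emitted term proof-carrying.
theorem double_two : double 2 = 4 := rfl
```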
Anthropic might be the first gigantic company to destroy itself by bootstrapping a capability race it definitionally cannot win.
They've been leading in AI coding outcomes (not exactly the Olympics) by being first on a few things, notably a serious commitment to high-cost/high-effort post-training (curated code and a fucking gigaton of Scale/Surge/etc) and basically the entire non-retired elite ex-Meta engagement org banditing the fuck out of "best pair programmer ever!"
But Opus is good enough to build the tools you need to not need Opus much. Once you escape the Claude Code Casino, you speedrun to agent-as-stochastic-omega-tactic fast. I'll be AI sovereign in January with better outcomes.
The big AI establishment says AI will change everything. Except their job and status. Everything but that. gl
I mean I'm running TensorRT-LLM on a basket of spot vendors at NVFP4 with auction convexity math and ClickHouse Keeper and custom passthrough.
I need more tokens, not fewer, because the available open-weight models aren't quite as strong, but I roofline sm_100 and sm_120 for a living: I get a factor of 2 on the spot arb, a factor of 2 on the utilization, and a factor of 4-16 on the quant.
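Those factors compound multiplicatively, which is the whole argument. A back-of-envelope check (the multipliers are the ones claimed above, nothing independently measured):

```python
# Stacked cost advantages: each is an independent multiplier on
# effective tokens per dollar.
spot_arb = 2.0                       # spot vs on-demand pricing
utilization = 2.0                    # rooflined occupancy vs typical
quant_low, quant_high = 4.0, 16.0    # NVFP4 vs baseline, claimed range

low = spot_arb * utilization * quant_low
high = spot_arb * utilization * quant_high
print(low, high)  # 16.0 64.0
```

A 16-64x token-cost edge is what makes "more tokens from weaker weights" a winning trade.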
It would need to implement a few dozen ioctls, correctly stub the kernel module in guests, do a probably-memory-safe assignment of GPU memory to guests, and then ultimately map that onto the BAR/MSI-X semantics of a real kernel module. You could get VFIO working pretty fast for a full start by correctly masking the LTR bits, but to truly make it free you'd need a userspace io_uring broker that had survived hundreds of millions of adversarial fuzz runs, because there's only so fast the firmware blob can run even if it's preloaded into initramfs.
Serious work, detail-intense, but not so different in design from e.g. Carmack's Trinity engine. Doable.
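The shape of the stub side is at least easy to picture: a dispatch table from ioctl command numbers to handlers, with unknown commands failing loudly. Everything here is hypothetical, the command values are made up, and real ones would come from the vendor module's UAPI headers:

```python
from typing import Callable, Dict

# Placeholder command numbers -- NOT real vendor ioctls.
HYPOTHETICAL_ALLOC = 0xC0084601
HYPOTHETICAL_MAP = 0xC0084602

def handle_alloc(arg: bytes) -> int:
    # would carve guest-visible GPU memory out of a host reservation
    return 0

def handle_map(arg: bytes) -> int:
    # would wire the allocation into the guest's BAR mapping
    return 0

IOCTL_TABLE: Dict[int, Callable[[bytes], int]] = {
    HYPOTHETICAL_ALLOC: handle_alloc,
    HYPOTHETICAL_MAP: handle_map,
}

def dispatch(cmd: int, arg: bytes) -> int:
    # "a few dozen ioctls, correctly stubbed" means unknown commands
    # are rejected, not silently acknowledged
    if cmd not in IOCTL_TABLE:
        return -1  # stands in for -ENOTTY
    return IOCTL_TABLE[cmd](arg)
```

The hard part, per the above, is not this table but the memory-safety and fuzz-survival of the broker behind it.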
https://imgur.com/a/Ztyw5x5