I still don't have IPv6 at home in the middle of San Francisco with Google Fiber / Webpass, and I have to egress through an HE.net tunnel like it's 2002 again.
Traditionally I've seen these adapters used primarily to hand binaries built for other architectures off to QEMU and similar emulators.
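For reference, the Linux side of that is a one-line binfmt_misc registration - a sketch, assuming qemu-user-static is installed at its usual path; the magic/mask pair matches the arm64 ELF header, and the trailing F flag pins the interpreter open so it keeps working inside chroots and containers:

    # as root, with binfmt_misc mounted at the usual place:
    # register qemu-aarch64-static as the handler for arm64 ELF binaries
    echo ':qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F' \
        > /proc/sys/fs/binfmt_misc/register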
Years ago on FreeBSD I created a "Volkswagen mode" using the analogous `imgact_binmisc` kernel module to register a handler for binaries with the system's native ELF headers. It took a bit of hacking to make it all work with the native architecture, but when it was done, the handler would simply execute the binary, drop its return code, and return 0 instead - effectively making the system think that every command was "successful".
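From memory, the registration looked something like this - a sketch, not the original hack (the wrapper name is made up, and note that a naive version like this recurses, since the wrapper's own /bin/sh is itself a native ELF that matches the handler, which is exactly the part that needed hacking around):

    # wrapper that runs the real binary and lies about the result
    cat > /usr/local/bin/vw-wrap <<'EOF'
    #!/bin/sh
    "$@"
    exit 0
    EOF
    chmod +x /usr/local/bin/vw-wrap

    # register it for anything carrying the plain ELF magic
    kldload imgact_binmisc
    binmiscctl add volkswagen --interpreter /usr/local/bin/vw-wrap \
        --magic "\x7f\x45\x4c\x46" --size 4 --offset 0 --set-enabled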
The system failed to boot when I finally got it all working (which was expected), but it was a fun adventure to do something so pointless and silly.
It would be a similarly clever place to maintain persistence and transparently inject bytecode, or do other rude things, on FreeBSD as well.
Yup, using this approach it's possible to build/use aarch64 containers on an x86 machine. This technique means that a much smaller set of operations is being emulated - only user-space code gets translated, and syscalls are handed to the native host kernel rather than to a fully emulated machine and kernel.
For something I was building, it let me get a full aarch64 compilation done with a native toolchain, without having to run a full-system emulation layer. The time savings were huge: off the top of my head, the full build took over an hour under full emulation, versus only about 10-15 minutes inside the container.
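The Docker version of that workflow is about three commands - a sketch, assuming the binfmt registration above (the tonistiigi/binfmt image does it for you) and an image name of your choosing:

    # one-time: register QEMU user-mode handlers for arm64
    docker run --privileged --rm tonistiigi/binfmt --install arm64

    # arm64 images then run (and build) directly on the x86 host
    docker run --rm --platform linux/arm64 alpine uname -m    # prints aarch64
    docker buildx build --platform linux/arm64 -t myimage:arm64 .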
I have a GitHub Action that uses an OAuth token to provision a new key and store it in our secrets manager as part of the workflow that provisions systems - the new systems then pull the ephemeral key to onboard themselves as they come up.
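The key-minting step is roughly this shape - a sketch, assuming Tailscale's key API and AWS Secrets Manager; the tailnet name and secret id are placeholders, and TS_API_TOKEN is whatever the OAuth flow handed you:

    # mint a short-lived, pre-authorized, ephemeral auth key
    KEY=$(curl -s -u "${TS_API_TOKEN}:" \
        -X POST "https://api.tailscale.com/api/v2/tailnet/example.com/keys" \
        -d '{"capabilities":{"devices":{"create":{"ephemeral":true,"preauthorized":true}}},"expirySeconds":3600}' \
        | jq -r '.key')

    # stash it where the new machines can pull it as they come up
    aws secretsmanager put-secret-value \
        --secret-id tailscale/bootstrap-authkey \
        --secret-string "$KEY"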
It can get especially interesting when you do things like have your GitHub runners onboard themselves to Tailscale - at that point you can pretty much fully provision isolated systems directly from GitHub Actions if you want.
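The runner side is a couple of lines in a workflow step - a sketch, with the secret name carried over from above (Tailscale also publishes an official tailscale/github-action that wraps the same thing):

    # fetch the ephemeral key and join the tailnet from the runner
    AUTHKEY=$(aws secretsmanager get-secret-value \
        --secret-id tailscale/bootstrap-authkey \
        --query SecretString --output text)
    sudo tailscale up --authkey "$AUTHKEY" --hostname "gh-runner-${GITHUB_RUN_ID}"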
Gell-Mann amnesia [1] is an interesting thing to think about here. But IMO the real problem is that editing at most journalism websites is done very, very poorly, if at all.
Skimming through the code (particularly past issues and PRs) highlights a number of things that look sketchy to me at first glance (in a coding-practices way, not a malicious way) - my gut feeling is that someone smarter than me going through much of this with a fine-toothed comb would likely find something exploitable.