What one considers fast or slow may vary, but the general rule is something like the following:
- very fast? run it all the time (shell prompt drawing, if you want, like direnv)
- fast? run it in a pre-commit hook
- a bit slow? run it in a pre-push hook
- really slow? run it in CI, during code review, etc.
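For the "a bit slow" tier, wiring a check into pre-push is just dropping an executable script into `.git/hooks/`. A minimal sketch (the throwaway repo and the placeholder check are illustrative; a real hook would run your actual slower linters):

```shell
#!/bin/sh
# Sketch: install a pre-push hook in a throwaway repo.
# In a real project you'd write .git/hooks/pre-push directly (or let a
# hook manager generate it); the check below is a placeholder.
set -e
repo=$(mktemp -d)
git init -q "$repo"

cat > "$repo/.git/hooks/pre-push" <<'EOF'
#!/bin/sh
# Put the slower checks here (the ones too slow for pre-commit).
echo "running pre-push checks..."
exit 0
EOF
chmod +x "$repo/.git/hooks/pre-push"

"$repo/.git/hooks/pre-push"
```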
Fwiw: I also rewrite history often-ish but it's never that fast for me because I have commit signing turned on and that requires verifying my presence to a USB smartcard on each commit. For me, it's okay if a commit takes a second or two. As it creeps up beyond 3 or 4 seconds, I become increasingly annoyed. If a commit took a minute I would consider that broken, and if I were expected to tolerate that or it were forced on me, I'd be furious.
I generally expect an individual pre-commit hook to take under ~200ms (hopefully less), which seems reasonable to me. Some of the ones we have are kinda slow (more than 1s) and maybe should be moved to pre-push.
Since you seem especially sensitive to that latency, here's what I'd propose if we worked together:
If you own a repo, let's make all the hooks pre-push instead of pre-commit. On my repos, I like many hooks to run pre-commit. But since the hooks we use are managed by a system that honors local overrides via devenv.local.nix, let's make sure that's in .gitignore everywhere. When I'm iterating in your codebases and I want more automated feedback, I'll move more hooks to pre-commit, and when you're working in mine you can move all my hooks to pre-push (or just disable them while tidying up a branch).
The standard way for this with current tools is to have the formatter/linter make the changes but exit with a non-zero status, failing the hook. Then the person reviews the changes, stages, and commits. (That's what our setup currently has `tofu fmt` do.)
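As a self-contained sketch of that "fix, then fail" pattern (a trivial trailing-whitespace cleanup stands in for `tofu fmt` here, so the example runs anywhere):

```shell
#!/bin/sh
# The hook formats in place, then exits non-zero if anything changed,
# so the commit stops and the author can review and re-stage the fixes.
set -e
tmp=$(mktemp -d)
printf 'resource "x" "y" {}   \n' > "$tmp/main.tf"  # trailing whitespace

before=$(cat "$tmp/main.tf")
sed -i 's/[[:space:]]*$//' "$tmp/main.tf"           # stand-in for: tofu fmt
after=$(cat "$tmp/main.tf")

status=0
if [ "$before" != "$after" ]; then
  echo "files were reformatted; review, stage, and commit again"
  status=1
fi
echo "hook would exit with status $status"
```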
But if you don't want to have hooks modify code, in a case like this you can also just use `tofu validate`. Our setup does `tflint` and `tofu validate` for this purpose, neither of which modifies the code.
This is also, of course, a reasonable place to have people use `tofu plan`. If you want bad code to fail as quickly as possible, you can do:
tflint -> tfsec -> tofu validate -> tofu plan
That'll catch everything Terraform will let you catch before deploy time (most of it very quickly) without modifying any code.
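Sketched as a fail-fast shell chain; stub functions stand in for the real tools so the example runs anywhere (swap in the actual `tflint`, `tfsec`, and `tofu` commands):

```shell
#!/bin/sh
# Run the cheapest checks first and stop at the first failure.
# Each function is a stub for the real command named in its comment.
run_tflint()   { echo "tflint: ok"; }     # stand-in for: tflint
run_tfsec()    { echo "tfsec: ok"; }      # stand-in for: tfsec
run_validate() { echo "validate: ok"; }   # stand-in for: tofu validate
run_plan()     { echo "plan: ok"; }       # stand-in for: tofu plan

# && short-circuits, so the expensive steps never run if a cheap one fails.
run_tflint && run_tfsec && run_validate && run_plan
```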
> make the changes but exit with a non-zero status
That's reasonable. My personal take (and that of my team at the time) was that I was willing to let formatting, and only formatting, be auto-merged into the commit, since that isn't going to affect logic. For anything else, though, I would definitely want to let the submitter review the changes.
I think of Git hooks as a useful guardrail for myself that I share with others. They sometimes help me shorten iteration cycles in workflows (which I generally dislike) where one typically commits before running/deploying (e.g., most of my team's Terraform repos).
> you can't force folks to run them
I think it's useful to be able to disable things like this when debugging or reconfiguring them. I sometimes disable the ones I've set up for myself, then reenable them later.
> Why not just call a shell script directly?
Because it's manual; I have to remember to do it each time. But there's no reason not to have a script you can invoke in other ways, if that's something you want.
> How would you use these with a CI/CD platform?
The thing that sets up your environment also installs the git hooks configuration, and/or you can have it invoke specific hooks via CLI.
> My main critique is that it mixes tool installation with linting
If you use a tool like this via Devenv instead of using its built-in mechanisms for installing tools:
- you can add a linter without putting it on your path
- you can put a linter on your path without enabling any git hooks for it
- if you are already using a linter in a git hook, adding it to your environment will get you the exact same version as you're already using for your git hook at no additional storage cost
- if you are already using a linter at the CLI, and you add a git hook for it, your hook will run with the exact same version that you are already using at the CLI at no additional storage cost
- your configuration interface is isomorphic to the upstream one, so
  - any custom hooks you're already using can be added without modification beyond converting from one format to another;
  - any online documentation you find about adding a custom hook not distributed with the upstream framework still applies;
  - and you can configure new, custom, or modified hooks with a familiar interface.
- any hook you write as a script or with substantial logic can also be plugged into a built-in task runner for use outside git hook contexts, where
  - you can express dependency relationships between tasks, so
  - every task runs concurrently unless dependency relations mandate otherwise.
Which imo solves that problem pretty well. My team uses that kind of setup in all of our projects.
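As a sketch of what that looks like in practice, assuming devenv's git-hooks integration (option names can differ between devenv versions, and `scripts/check.sh` is a hypothetical entry point):

```nix
# devenv.nix (sketch)
{ pkgs, ... }: {
  git-hooks.hooks = {
    # A built-in hook: the tool comes from the same nixpkgs pin as the
    # rest of the environment, so CLI and hook share one version.
    shellcheck.enable = true;

    # A custom hook, configured with the same shape as upstream pre-commit.
    my-check = {
      enable = true;
      entry = "scripts/check.sh";  # hypothetical script
      files = "\\.sh$";
    };
  };
}
```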
> The interface isn't built with parallelism in mind, it's sort of bolted on but not really something I think could work well in practice.
I'm curious about what this means. Could you expand on it?
You might be interested in something like `treefmt`, which is designed around a standard interface for configuring formatters to run in parallel, but doesn't do any package management or installation at all:
(That might address both of the issues I've replied to you about so far, to some extent.)
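A treefmt configuration is essentially a mapping from formatters to the files they own, e.g. (formatter names and commands here are illustrative):

```toml
# treefmt.toml (sketch)
[formatter.nix]
command = "nixfmt"
includes = ["*.nix"]

[formatter.terraform]
command = "tofu"
options = ["fmt"]
includes = ["*.tf"]
```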
> It also uses a bunch of rando open source repos which is a supply chain nightmare even with pinning.
If the linters you're running are open source, isn't this what you're ultimately doing anyway? Nix gives a bit more control here, but I'm not sure whether it directly addresses your concern.
> I am working on a competing tool
Oh, dammit. Only after writing all of this out did I realize you're the author of mise. I'm sure you're well aware of Nix and Devenv. :)
Because I think your critiques make sense and might be shared by others, I'll post this comment anyway. I'm still interested in your replies, and your opinion of having some environment management tool plug into prek and supplant its software installation mechanisms.
And because I think mise likely gets many of these things right in the same way Devenv does, and reasonable people could prefer either, I'll include links to both Devenv and mise below:
It's idiomatic in pre-commit to leverage the public plugins, which all do their own tool installation. If you're not using them, and not using pre-commit for tool installation either, I'm not sure why you wouldn't just use the much simpler lefthook.
If you look at hk you will understand what I'm talking about in regards to parallelism. hk uses read/write locks and other techniques like processing --diff output to safely run multiple fixers in parallel without them stomping on each other. treefmt doesn't support this either, it won't let you run multiple fixers on the same file at the same time like hk will.
> If the linters you're running are open source, isn't this what you're ultimately doing anyway?
You have to trust the linters. You don't also need to trust the plugin authors. In hk we don't really use "plugins", but the equivalent is first-party, so you're not extending trust to extra parties beyond me and the lint vendors.
> If you look at hk you will understand what I'm talking about in regards to parallelism. hk uses read/write locks and other techniques like processing --diff output to safely run multiple fixers in parallel without them stomping on each other. treefmt doesn't support this either, it won't let you run multiple fixers on the same file at the same time like hk will.
That sounds pretty cool! I will definitely take a closer look.
Since no one makes Android devices with hardware keyboards anymore, I almost never use this kind of software. After getting burned by a couple of Kickstarter phones hampered by half-baked software and a total lack of updates, the only thing I could rationally conclude is that Android as a productivity platform is a lost cause.
When Android was new, I very frequently used Termux and ConnectBot with my first few Motorola Droid phones. For a brief moment, I had a working phone with a great physical design only held back by an outdated chipset and being locked to Planet Computers' abandonware. I could touch-type at 80 WPM on an easily pocketable device! Termux shone there.
So many things about Android were not just more exciting in terms of potential when it was new, but actively better: wider variety of hardware, widely unlocked bootloaders, no remote attestation, etc. Termux sadly feels like a painful reminder of that to me.
I have a tiny but very comfy Bluetooth keyboard, though 90% of the time I'm using a keyboard with Android it's with my tablet (and it's easy to forget it's not a laptop).
Centralized automatic updates, like those of a Linux distribution or Microsoft's Windows Update, involve giving far fewer parties permission to download and run (unsigned, in the case of Notepad++ this time) code on your machine with high privileges.
And for more modern software distribution mechanisms (e.g., Nix, Guix, Flatpak), centralized package updates may not actually run any vendor code with high privileges at all.
The norm for proprietary software updates on Windows is indeed a free-for-all of every publisher downloading and running code with admin rights, and it is indeed a terrible way to operate. Avoiding that kind of madness doesn't necessarily mean running lots of old, vulnerable software.
> I am still surprised most Linux Distros haven't changed their package managers to allow for selling of proprietary solutions directly, fully opt-in by default of course.
That's essentially being done with Flatpak.
Linux is largely still built on the old (and indeed outdated) Unix trust model: the system itself is assumed to be trusted, and the primary security boundaries on the system are drawn between users. Since Linux package managers install and manage the base system as well as end-user software, anything the package manager installs is treated as part of "the distribution", and thus trusted. That makes them a poor fit for installing proprietary, third-party software: the curation and vetting of the distro maintainers is vital here, and when you add a third-party repo, you're extending it a lot of trust.

At the same time, why would distro maintainers donate free labor to integrate proprietary software? Most aren't interested, and even those who are generally don't have the rights necessary to redistribute, let alone modify, it. On the other side, third-party developers and publishers don't want to master and maintain a half-dozen different packaging formats, plus the various other packaging-ecosystem differences across distros.
Flatpak is positioned to solve all of these problems, and it's no secret that enabling (relatively) responsible use of proprietary software is one of its goals. It enables distributing a small number of large, common runtimes, different versions of which can safely coexist on the same system, which addresses fragmentation. To reduce the trust given to installed apps, it separates what it installs from the base system and offers sandboxing to limit the permissions granted to an app, which still runs as the OS user of the person using it. And it supports third-party repos that publishers can run themselves.
I'm not currently a daily Flatpak user, so idk how much the current reality lines up with that goal, but that's where the movement towards this is on the Linux desktop today.