alexjurkiewicz's comments | Hacker News

Developer goodwill. And it probably cost a song.

The blog post says, many times, not to use Gastown. It makes fun of the tool's inconsistent branding and describes a lot of jankiness.

This tool is dangerous, largely untested, and yet may be of interest if you are already doing similar things in production.


(2024)

My favourite part of these tools is the zany use of numbered file descriptors. `keypair` outputs the public key on fd 5 and secret key on fd 9. But signing reads the secret key on fd 8, while verification reads the public key on fd 4! Why aren't they the same?? I have to read the manpage every time.
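A rough sketch of what that looks like in practice (fd numbers as I described above, and I'm assuming the message itself travels over stdin/stdout; double-check the manpage):

  ed25519-keypair 5>pk 9>sk       # pk on fd 5, sk on fd 9
  ed25519-sign 8<sk <msg >smsg    # ...but signing reads sk on fd 8
  ed25519-verify 4<pk <smsg >msg  # ...and verification reads pk on fd 4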


I'm curious, what do you actually use it for?

I'd have otherwise guessed that this tool mainly exists just to test lib25519. Personally I'd only ever want a library, or some higher-level tool. A CLI tool that just does raw signing feels like a weird (and footgun-shaped) middle ground.


This mostly exists to test lib25519 and ostensibly to build systems with shell scripts (though: few people would do that). It is a weird and footgun-shaped middle ground.


It's why no one has succeeded in replacing GPG: you need a lot of systems to work in order to have an actually viable one; the ability to spit out signatures from keys is necessary but not sufficient.


GPG is pervasive for the same reason git is pervasive: network effects. There are plenty of better alternatives.


Such as? I need an alternative which supports commutative trust relationships of some sort which are revocable.


You (knowingly?) picked the one counterexample, lol. Web of trust is the one application of PGP/GPG for which there isn’t a product-ready replacement tool to point towards. GPG is built around web of trust, but this is generally believed to have been a very, very bad idea and the source of innumerable security problems for nearly every application that has tried to make use of it. The GPG replacements I would point to are purpose-built for specific domains and eschew web of trust:

https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/

That said, you might find what you are looking for in the Rebooting Web of Trust project, and the various decentralized identity (DID) implementations that have come out of it:

https://www.weboftrust.info/


No, I picked the case I'm dealing with most commonly: establishing trust. X.509 certs will also do this.

I have numerous criticisms of the GPG system, but it's not a solution to just not implement any solution at all. I.e. I need revocation lists, I need intermediate keys, I need the ability to establish alternate chains of trust or promote a chain to trusted. Some of this is very hard to do, or not well supported, even with X.509.


Trust meaning who you should do business with? Whose advice you should take?

Rather than “trust” you mean something very specific: whether a key was issued by an entity, or attested to from a set of authorities. The “web of trust” model that PGP/GPG supports is not the ideal means of implementing this.


Keybase or any of the tools inspired by Keybase (foks.pub, etc.)


Isn’t keybase to GPG what github is to git?

> I'm curious, what do you actually use it for?

FTA:

> These tools allow lib25519 to be easily used from shell scripts.

I've never used ed25519-cli, but not having to use a library is nice for someone who isn't a programmer.


The Venn diagram of "not a programmer" and "can safely use Ed25519" is two non-overlapping circles.


"this app needs me to generate a key and point to it in config" is plenty of overlap


If you just want a raw ed25519 private key then `head -c32 /dev/urandom` does the job. But usually you want a DER/PEM wrapper or similar, which the openssl cli tools handle nicely.
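For the wrapped form, roughly (this needs OpenSSL 1.1.1 or newer):

  # PKCS#8 PEM private key
  openssl genpkey -algorithm ed25519 -out sk.pem
  # corresponding public key
  openssl pkey -in sk.pem -pubout -out pk.pem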


I don't consider myself a programmer and I can use Ed25519 safely. I do however understand computing fairly well.


I consider myself a programmer and ed25519-understander, but the idea of using it directly within a shell script terrifies me.


Simply combine this tool with `openssl enc` and your shell script is as secure as any shell script could be


Someone writing shell scripts is a programmer, for better or worse.


That's such a user-hostile design decision. I can't fathom what justifies it (other than kinky taste).

It makes your commands unreadable without a manual and leaves a lot of room for errors that are quietly ignored. And it forces you into using a shell that comes with its own set of gotchas; bash is not known to be a particularly good tool for security.

And to those who say this adds flexibility: it doesn't. Those file descriptors are available under /dev/fd on Linux; with named options you can do --pk /dev/fd/5. Or make a named pipe.
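Roughly (Linux, bash; --pk here is the hypothetical named option):

  some-tool --pk /dev/fd/5 5<publickey   # path-based option backed by an inherited fd
  some-tool --pk <(cat publickey)        # or a process substitution, no fd juggling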


> Those file descriptors are available under /dev/fd on Linux; with named options you can do --pk /dev/fd/5.

If you have a procfs mounted at /proc and are able to use the open syscall on it, sure (and even then, it’s wasteful and adds unnecessary failure paths). Even argument parsing is yet more code to audit.

I think the design is pretty good as-is.


It's 2025, dude. You can't be seriously telling me how difficult it is to parse arguments. It may be difficult in C, but then we're down another sick rabbit hole of justifying a bad interface with a bad language choice.

One open syscall in addition to dozens already made before your main function is started will have no observable effect whatsoever.


The context is what’s essentially a shell-accessible library for a minimal set of cryptographic primitives. It’s very reasonable to want it to be as lightweight, portable, and easy to audit as possible, and to want it to run in environments where (continuing on Linux for example) the open syscall to /dev/fd/n -> /proc/self/fd/n will not succeed for whatever reason, e.g. a restrictive sandbox.

Not involving argument parsing simplifies the interface regardless of how easy the implementation is, and the cost is just having to look up a digit in a manual that I certainly hope anyone doing raw ed25519 in shell is reading anyway.


Make a named pipe then. Shells have built-in primitives for that. I.e. <() and >() subshells in bash, or psub in fish. Or have an option to read either a file descriptor or a file.
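For instance, with an explicit named pipe (POSIX sh; some-tool and its --sk option are hypothetical):

  mkfifo sk.pipe
  gpg --decrypt sk.gpg >sk.pipe &   # producer blocks until the pipe is opened for reading
  some-tool --sk sk.pipe
  rm sk.pipe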

I can't understand why you keep inflating the difficulty of simple commandline parsing, which the tool needs to do anyway; we shouldn't even be talking about it. Commandline parsing code is written once (and read once per audit), while the hostile user interface a bad commandline creates takes effort every time someone invokes the tool. If the tool has 1000 users, the bad interface's overhead carries 1000× the weight when measured against the overhead of implementing commandline parsing. This is preposterous.

> Not involving argument parsing simplifies the interface

From an interface perspective, how is `5>secretkey` simpler than `--sk secretkey`? The latter is descriptive, searchable and allows bash completion. I'll type `ed25519-keypair`, hit tab and recall what the argument is called.

You can't justify a poorly made interface that is unusable without opening the manual side by side. Moreover, the simplest shell scripts that call this tool are unreadable (and thus unauditable) without the manual.

  ed25519-keypair 5>secretkey 9>publickey
You see this line in a shell script. What does it do? Even before asking some deeper crypto-specific questions, you need to know what's written in the "secretkey" and "publickey" files. You'll end up spending time (even a minute) and context-switching to check the descriptor numbers instead of doing something actually useful.

> which the tool needs to do anyway

It doesn’t. The tool has no command-line arguments.

Please learn how the various shell concepts you’re referencing (like <()) actually work and get back to me if you still need to after that.

In any case, I’m well aware of the readability benefit of named arguments, and was when I made the original comment. So as you can imagine, I maintain that it’s a more than reasonable tradeoff, and I’ve covered the reasons for that. If you have nothing (correct) to add beyond hammering on this point, save it.


You got me, it doesn't have arguments. Luckily, my argument did not critically rely on this bit, and it's still valid. Instead of occasional disconnected thoughts and vulgar attempts to insult, try to construct a complete, coherent argument for why you think your view is valid.

A suggestion on how you could approach it: try to make a table with 2-3 columns for the solutions you and I are comparing. Add a row for each aspect or characteristic you want to compare them on; for example, usability, ease of implementation, room for error, you name it. In each cell, put either + or - if a solution clearly handles that aspect well or badly, or a detailed comment. Try to express all of the things you're feeling and that come to mind. My comments are written with a table like that in mind; they translate easily to one. Once you have made your table and established that we disagree on what some cell should contain or which rows/columns should be present, feel free to get back to me for an actual discussion.


You’ve misrepresented or ignored all of my arguments, which are fairly complete as written. You can reformat them into a table for your personal use if it helps; I haven’t seen evidence that continuing into “an actual discussion” with you on this would have any value. (“It's 2025, dude. You can't be seriously telling me how difficult it is to parse arguments.” was a bad start, and while I’m on it: wrong and right, respectively.)

Haha, you got me again!

I honestly tried putting yours into a table and couldn't in a way that makes it look defensible. About 2025: I generally find a bit of cheekiness appropriate for dramatic effect; apologies if I offended you.


(nicer reply to this)

Yes, I’m aware of the readability benefit of named arguments, and made the original comment with that awareness too.

> Make a named pipe then. Shells have built-in primitives for that. I.e. <() and >() subshells in bash,

That’s /proc/self/fd again. But okay, you can make a named pipe to trade the procfs mount and corresponding open-for-read permission requirement for a named pipe open-for-write permission requirement without receiving the other benefits I listed of just passing a FD directly.

> I can't understand why you keep inflating the difficulty of simple commandline parsing

Not only have I not “kept inflating” this, I barely brought up the related concept of it being unnecessary complexity from an implementation side (which it is).

> which the tool needs to do anyway

It doesn’t. The tool has no command-line arguments.

> From an interface perspective, how is `5>secretkey` simpler than `--sk secretkey`? The latter is descriptive, searchable and allows bash completion. I'll type `ed25519-keypair`, hit tab and recall what the argument is called.

Not introducing More Than One Way To Do It after all (“Or have an option to read either a file descriptor or a file”) here is a good start, but it’s hard to beat passing a file descriptor for simplicity. If the program operates on a stream, the simplest interface passes the program a stream. (This program actually operates on something even simpler than a stream – a byte string – but Unix-likes, and shells especially, are terrible at passing those. And an FD isn’t just a stream, but the point is it’s closer.) A file path is another degree or more removed from that, and it’s up to the program if/how it’ll open that file path, or even how it’ll derive a file path from the string (does `-` mean stdin to this tool? does it write multiple files with different suffixes? what permissions does it set if the file is a new file – will it overwrite an existing file? is this parameter an input or an output?).

Your attached arguments seem to be about convenience during interactive use, rather than the kind of simplicity I was referring to. (Bonus minor point: tab completion is not necessarily any different.)

> Moreover, the simplest shell scripts that call this tool are unreadable (and thus unauditable) without the manual.

That might be a stretch. But more importantly, who’s trying to audit use of these tools without the manual? You can be more sure of the program’s interpretation of `--sk secretkey` (well, maybe rather `--secret-key=./secretkey`) than `9>` if you know it runs successfully, but for anything beyond that, you do need to know how the program is intended to work.

Finally, something I probably should have mentioned earlier: it’s very easy to wrap the existing implementation in a shell function to give it a named-parameter filepath-based interface if you want, but the reverse is impossible.
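A minimal sketch of such a wrapper (bash; fd assignments as in the example upthread, so verify them against the manpage):

  ed25519_keypair() {
    local pk sk
    while [ "$#" -gt 0 ]; do
      case "$1" in
        --pk) pk=$2; shift 2 ;;
        --sk) sk=$2; shift 2 ;;
        *) echo "unknown option: $1" >&2; return 1 ;;
      esac
    done
    ed25519-keypair 5>"$sk" 9>"$pk"
  }
  ed25519_keypair --pk publickey --sk secretkey   # usage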


I see, you are more focused on providing the core functionality in the simplest way possible from a purely technical perspective, and less so on what kind of "language" or interface it presents to the end user, assuming someone who wants an interface can write a wrapper. I can see that your points make sense from this perspective; the solution with FDs is indeed simpler from this viewpoint.

I, on the other hand, criticized it as a complete interface made with some workflow in mind, one that would need no wrappers and would help the user discover it and avoid footguns. Your interpretation sounds like what the authors may have had in mind when they made it.

> who’s trying to audit use of these tools without the manual?

I'd try to work on different levels when understanding some system. Before getting into details, I'd try to understand the high-level components/steps and their dataflows, and then gradually keep refining the level of detail. If a tool has 2-3 descriptively named arguments and you have a high-level idea of what the tool is for, you can usually track the dataflows of its call quite well without manual. Say, understanding a command like

  make -B -C ./somewhere -k
may require the manual if you haven't worked with make in some time and don't remember the options. But

  make --always-make --directory=./somewhere --keep-going
gives you a pretty good idea. On the second read, where you're being pedantic with details, you may want to open the manual and check what those things exactly mean and guarantee, but it's not useless without the manual either.

It being an option can be nice if you don't want your keys touching disk and need to pass them over to other apps.

It being the default is insanity.


I was wondering the same thing. My best guess is that it is to guard against operator misuse, like USB-A only plugging in one way. Anything that is secret will never accidentally print to stdout. String interpolation in bash with `--option $empty` might be safer than `8<$empty`. Have to explore more, but yeah, this is a new pattern for me as well.
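A quick way to see the difference (bash; `sometool` is a stand-in):

  empty=""
  sometool --key $empty   # quietly collapses to: sometool --key
  sometool 8<$empty       # bash: $empty: ambiguous redirect (fails loudly; sometool never runs)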


Another possible factor driving the decision to use numbered file descriptors: the logic to validate that a file exists (or can exist) at a given path, is readable/writable, etc. gets punted to the shell instead of being something the program itself has to worry about.
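For example, the shell reports the failure and never runs the program at all (bash):

  $ ed25519-sign 8</no/such/file <msg >msg.sig
  bash: /no/such/file: No such file or directory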


Those descriptors like 5 could be mapped to anything, including descriptor 1, stdout.


What a strange convention. I'm partial to minisign, which works on plain old files.


This little CLI is not meaningfully an alternative to signify/minisign. Here's a good piece on signify from its author (who also comments here):

https://www.openbsd.org/papers/bsdcan-signify.html


I’m guessing it’s to support the test framework it’s built with?


Support is fine. Being the default is crazy.


It's djb's web site so it's a djb design. With great genius comes great different thinking.


I play several sports across several teams and leagues. Each league has their own system for delivering fixtures. Each team has its own system of communication.

What I want is software that can glue these things together. Each week, announce the fixture and poll the team to see who will play.

So far, the complete fragmentation of all these markets (fixtures, chat) has made software solutions uneconomic. Any solution's sales market is necessarily limited to a small handful of teams, and will quickly become outdated as fixtures move and teams evolve.

I'm hopeful AI will let software solve problems like this, where disposable code is exactly what's needed.


That sounds more like a bureaucratic problem (access to data) than a software problem.


The article isn't talking about "industrial" in relation to user interfaces. It isn't talking about user interfaces at all.

Your consumer/enterprise/industrial framework is orthogonal to the article's focus: how AI is massively reducing the cost of software.


The "industrialisation" concept is an analogy to emphasize how the costs of production are plummeting. Don't get hung up pointing out how one aspect of software doesn't match the analogy.


> The "industrialisation" concept is an analogy to emphasize how the costs of production are plummeting. Don't get hung up pointing out how one aspect of software doesn't match the analogy.

Are they, though? I am not aware of any indicators that software costs are precipitously declining. At least as far as I know, we aren't seeing complements of software developers (PMs, sales, other adjacent roles) growing rapidly, which would indicate a corresponding supply increase. We aren't seeing companies like Microsoft or Salesforce or Atlassian or any major software company reduce prices due to a supply glut.

So what are the indicators (beyond blog posts) this is having a macro effect?


It's the central point of the metaphor. Software is not constrained by the speed of implementation; it's constrained by the cost of maintenance and adaptation to changing requirements.

If that wasn't the case, every piece of software could already be developed arbitrarily quickly by hiring an arbitrary amount of freelancers.


But focusing on production cost is silly. The cost to consumers is what matters. Software is already free or dirt cheap because it can be served at zero marginal cost. There was only a market for cheap industrial clothes because tailor-made clothes were expensive. This is not the case in software, and that's why the whole industrialization analogy falls apart upon inspection.


I think part of this is that Status Page updates require AWS engineers to post them. In the smaller Tokyo (ap-northeast-1) region, we've had several outages which didn't appear on the status page.


You are absolutely right! Let me write a Go program to implement this idea. The bloom filters will take approximately 5GB (for a 1% error rate) and take a few minutes to populate on a modern MacBook Pro.

https://gist.github.com/alexjurkiewicz/1abf05f16fd98aabf380c...


ZeroTier supports relaying natively. You create a network with three nodes, the "client" (on the internet), the "gateway" (public subnet) and the "server" (private subnet). ZeroTier will automatically route traffic between the client and server through your gateway with no configuration.
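Roughly, with the standard CLI (the network ID is a placeholder for your own):

  # run on the client, gateway, and server alike
  sudo zerotier-cli join <network-id>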


It doesn't seem like the design of this experiment allows AIs to evolve novel strategy over time. I wonder if poker-as-text is similar to maths -- LLMs are unable to reason about the underlying reality.


You mean that they don’t have access to opponents' whole behavior?

It would be hilarious to allow table talk and see them trying to bluff and sway each other :D


I think by

> LLMs are unable to reason about the underlying reality

OP means that LLMs hallucinate 100% of the time with different levels of confidence and have no concept of a reality or ground truth.


Confidence? I think the word you’re looking for is ‘nonsense’


Make the entire chain of thought visible to each other and see if they can evolve strategies for hiding things in their CoT.


Pardon my ignorance, but how would you make them evolve?


I mean, LLMs have the same sorts of problem with

"Which poker hand is better: 7S8C or 2SJH"

as

"What is 77 + 19"?

