
Exactly. OP bought a lifetime license but expects a lifetime product.

Offer 99.something% guaranteed exactly-once-delivery. Compete on number of nines. Charge appropriately.

In public transport? Or are you changing the subject?

They might be making a distinction between the language and the current implementation. In fact, I'd call going through multiple fundamentally different implementations without changing the semantics an argument in favor of the language's maturity.

Which makes yubikey impossible to use with geographically distributed backups. You need the backup available at all times for when you want to register with any new service.

This is why you should use a device that allows exporting the seed, e.g. multi-purpose hardware crypto wallets.


This is true for passkeys/webauthn/u2f, which is why it’s trash: a completely flawed standard that is not fit for purpose (of course the primary purpose is vendor lock-in, not reliable and disaster-proof authentication).

But SSH allows you to export the public key and then you can enroll it on as many hosts as you want without needing access to the private key, so the backup key can remain in a safe, ideally forever as you should never need it.
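
To make that concrete: enrolling the backup key is just copying one line of text around, no hardware needed. A toy sketch in Python (paths and filenames are made up):

    from pathlib import Path

    # The *.pub file is all you need to enroll; the private half stays
    # on the token in the safe.
    backup_pub = Path("backup_ed25519_sk.pub").read_text().strip()
    authorized = Path.home() / ".ssh" / "authorized_keys"

    # Idempotently append the public key on each host you care about.
    existing = authorized.read_text() if authorized.exists() else ""
    if backup_pub not in existing:
        with authorized.open("a") as f:
            f.write(backup_pub + "\n")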


I agree that it's inconvenient in many cases, but what vendor am I being locked into, exactly? My primary hardware key can be from a completely different vendor than the backup one, so I don't quite buy the conspiracy angle.

There's also no technical obstacle preventing anyone from creating "paired" hardware authenticators that share the same internal root derivation key and can as such authenticate to all services (at least if they don't demand resident credentials) that were registered to any of the keys in the set.
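
A minimal sketch of the idea, assuming a shared root seed and HMAC-based per-site derivation (names and scheme are illustrative, not any real product's):

    import hmac, hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def derive_site_key(root_seed: bytes, rp_id: str) -> Ed25519PrivateKey:
        # Deterministic per-site seed: HMAC(root_seed, rp_id) -> 32 bytes
        site_seed = hmac.new(root_seed, rp_id.encode(), hashlib.sha256).digest()
        return Ed25519PrivateKey.from_private_bytes(site_seed)

    root = bytes.fromhex("00" * 32)               # shared at manufacture time
    key_a = derive_site_key(root, "example.com")  # "primary" authenticator
    key_b = derive_site_key(root, "example.com")  # "backup" authenticator

    # Both devices derive the same key pair, so a credential registered
    # with either one is usable by the other.
    assert (key_a.public_key().public_bytes_raw()
            == key_b.public_key().public_bytes_raw())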

The fact that these keys don't exist on the market (I believe Yubikey looked into them a while ago) is, in my view, more evidence for a lack of demand than for the existence of a cabal.


Being locked into a set of a handful of vendors who offer "secure" sync (of course, this is not a true PKI and actual key material is being synced, meaning it's only as secure as the syncing protocol and your authentication to it).

Authenticator implementations that allow exports outside of the vendor cartel have been threatened with blacklisting: https://github.com/keepassxreboot/keepassxc/issues/10407

> My primary hardware key can be from a completely different vendor than the backup one, so I don't quite buy the conspiracy angle.

The fundamental flaw is that enrolling an authenticator requires it to be present, which makes a backup strategy much less resilient: your secondary device needs to be constantly at hand and is thus exposed to the same local/environmental risks as the primary one (theft/fire/a faulty USB port that fries everything plugged in, which you only realize after nuking both your keys). It makes an offline backup scenario like with SSH (where you copy a public key and otherwise leave the authenticator out of reach in a safe place) impossible.

Making it hard/impractical to maintain a reliable backup yourself sure makes those proprietary sync-based services attractive, which also doubles as reducing security since key material is being synced and can potentially be extracted (impossible with a true HSM + PKI implementation).

> preventing anyone from creating "paired" hardware authenticators

Don't certain types of keys involve writing something to the authenticator, fundamentally preventing this (as the backup authenticator won't get this written value)?

> cabal

It doesn't have to be explicit coordinated action like intentionally wanting to prevent people from self-managing passkeys (in fact any hint of it being intentional would be a liability in a potential anti-trust situation, so that's a big no-no); it can be done by simply omitting this scenario, by accident or for "security" purposes, or deprioritizing it to hell. In fact the Credential Migration spec is still a draft and appears quite recent, despite passkeys being heavily promoted for a while: https://fidoalliance.org/specs/cx/cxp-v1.0-wd-20241003.html - you'd think that such a basic feature would be sorted out before the push to switch to passkeys, no?

In fact you see this exact strategy playing out: https://github.com/keepassxreboot/keepassxc/issues/11363#iss...

> For the initial delivery of Credential Exchange, we focused on the most wide use case [emphasis mine]

"Initial" delivery focuses on the most widespread use-case (how convenient it also happens to be the most corporation-friendly use-case), with everything else coming "later", meaning never. I'm sure it'll rot in some Jira backlog as a liability shield so they can promise they did plan for it and just never got around to it, but everyone understands it will never actually get implemented.


How can the "cartel" "blacklist" anyone? The only thing the FIDO Alliance can do is not include a vendor's attestation key as trusted in their vendor database, and software solutions aren't on that list to begin with.

> The fundamental flaw is that enrolling an authenticator requires it to be present [...]

Yes, but that doesn't mean you can't backup the full authenticator state.

Here's a toy WebAuthn implementation backed by a passphrase you can remember or write on a piece of paper. It works on many websites supporting passkeys that don't enforce attestation (which is the vast majority, since Apple, Google, 1Password, and Bitwarden all don't support attestation for synchronized credentials, a.k.a. passkeys): https://github.com/lxgr/brainchain

> Making it hard/impractical to maintain a reliable backup yourself sure makes those proprietary sync-based services attractive

It's also completely open source and can be backed up :) (But again, it's a toy demo – don't use it for anything sensitive!)


> How can the "cartel" "blacklist" anyone?

All they have to do is publish a "best practices" statement or some RP certification program mandating the use of attestation (plus some PR around how only "certified" RPs are secure), and job done. The only reason they haven't done that yet is that Apple is refusing to play ball and support attestation (but this may change).

The threat was clearly there in the original GitHub issue; that they can't currently follow through on it is just a temporary inconvenience.

> Yes, but that doesn't mean you can't backup the full authenticator state.

Having the secondary authenticator present in the same vicinity as the primary one exposes it to risks. Having to dump authenticator state at regular intervals now means your backup authenticator must be reachable for writing online, so it can't be a simple "cold storage" backup like a Yubikey in a safe anymore. This also opens up security concerns since you're now dumping and syncing private keys left and right over a network and you lose the peace of mind of using an HSM-backed non-exportable private key where the HSM being unplugged guarantees nobody is currently using your keys.

Seems like a shit ton of complexity and effort to work around a problem OpenSSH elegantly solved 30 years ago.

> Here's a toy WebAuthn implementation

Thanks, I will check it out and read up on it. I'd be genuinely happy to move to WebAuthn if I could build my own hardware authenticators that allow the backup one to remain fully offline in a safe, and not have private keys flying around (if I'm doing that, it's not much of an improvement over syncing passwords - except those I can at least type or tell over the phone in an emergency when I need someone else to act on my behalf).

Edit: so it seems like I am mostly right? Only discoverable credentials count as "passkeys", and those generate per-site private keys, meaning offline, cold-storage backups are impossible. I guess I'm sticking to my password manager then since passkeys would provide no improvement in this case.


> Having to dump authenticator state at regular intervals [...]

Again, you don't inherently have to do this if you only use non-resident keys (which many sites allow; my hardware authenticator does not even support resident keys).

Synchronized resident keys are not the only possible WebAuthn implementation, even though they are currently getting heavily pushed by big stakeholders. The upside, though, is that they lost hardware attestation in the process, so everybody is free to use their own implementation instead.

Thinking about it some more: I'm pretty sure that there are crypto wallets that support FIDO (or maybe just U2F, i.e. the predecessor of CTAP2?) as a secondary application, and they are almost always based on a passphrase you can back up and replicate across authenticators as you wish.

> Seems like a shit ton of complexity and effort to work around a problem OpenSSH elegantly solved 30 years ago.

There are very good reasons for requiring the private key at registration time and for mandatory per-site keys in WebAuthn/FIDO, which are arguably the two main differences between WebAuthn and SSH at a protocol level:

Global keys would be a privacy nightmare (as they would become global identifiers), and being able to register a public key without a private key risks users both accidentally registering a key they don't have access to (i.e. an availability problem) and getting socially engineered into registering somebody else's key that is not even physically present with them.

But again, per-site keys can absolutely be implemented without having to keep state on the authenticator, since they can be deterministically derived from a root secret.
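
For example, here's roughly the stateless "key handle" trick non-resident credentials used back in the U2F days; a toy sketch, not any vendor's actual scheme (all names are made up):

    import os, hmac, hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    DEVICE_SECRET = os.urandom(32)  # the only state the authenticator keeps

    def private_key_for(rp_id: str, nonce: bytes) -> Ed25519PrivateKey:
        seed = hmac.new(DEVICE_SECRET, b"key" + nonce + rp_id.encode(),
                        hashlib.sha256).digest()
        return Ed25519PrivateKey.from_private_bytes(seed)

    def register(rp_id: str):
        nonce = os.urandom(16)
        # Credential ID = nonce || MAC binding it to this rp_id and device
        mac = hmac.new(DEVICE_SECRET, nonce + rp_id.encode(), hashlib.sha256)
        cred_id = nonce + mac.digest()[:16]
        return cred_id, private_key_for(rp_id, nonce).public_key()

    def get_assertion(rp_id: str, cred_id: bytes, challenge: bytes) -> bytes:
        nonce, tag = cred_id[:16], cred_id[16:]
        expected = hmac.new(DEVICE_SECRET, nonce + rp_id.encode(),
                            hashlib.sha256).digest()[:16]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("credential ID not minted by this device")
        # Rederive the per-site key on demand; nothing per-site is stored.
        return private_key_for(rp_id, nonce).sign(challenge)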


AFAIK you do, because the hardware key must keep internal state which is also tracked by the server (a monotonically increasing signature counter). Offering U2F without this is AFAIK not compliant, and the only way to achieve it would be a central server which keeps that state somehow. It's really fundamentally unsolvable.
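
Roughly, the server-side half of that check looks like this (a simplified sketch; real implementations follow the WebAuthn rules around zero counters more carefully):

    # credential ID -> last counter value the server has seen
    last_seen: dict[bytes, int] = {}

    def check_counter(cred_id: bytes, counter: int) -> None:
        prev = last_seen.get(cred_id, -1)
        if counter != 0 and counter <= prev:
            # A restored backup would replay an old counter value.
            raise ValueError("possible cloned authenticator")
        last_seen[cred_id] = counter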

Not true. If you use YubiKeys to store your GPG key, it's not a problem. You can have multiple YubiKeys with the same private key, or you can encrypt to multiple recipients.

Are you talking about SSH or a different setting?

With SSH, you can always share the primary and backup pub keys, even if you don't have the backup key handy.


No, I got distracted by the word yubikey. Arguably not the same subject. :)

Nonetheless I'm glad to hear about it. I don't yet use YubiKeys for FIDO, because I was concerned a bit about this enrollment process, and hadn't bothered to figure out what others do.

> You need the backup available at all times for when you want to register with any new service.

Not for SSH (at least using the OpenSSH sk implementation).


> Which makes yubikey impossible to use with geographically distributed backups.

Huh?

You do know you can wrap a symmetric key with multiple asymmetric keys, right?
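
E.g., a quick sketch in Python (RSA-OAEP here purely for illustration; GPG does the moral equivalent when you encrypt to multiple recipients):

    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    primary = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    backup = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Encrypt the payload once with a random symmetric key...
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, b"secret payload", None)

    # ...then wrap that key separately for each recipient.
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped = [k.public_key().encrypt(data_key, oaep) for k in (primary, backup)]

    # Either private key alone can unwrap the data key and decrypt.
    recovered = backup.decrypt(wrapped[1], oaep)
    assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"secret payload"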


Disagree with this, just because it makes everything else easier. The more you stick to common key bindings, the more intuitive various packages will be. E.g. navigating lines vs. blocks in a magit diff is C-n and N respectively. Copying a full hash is M-w. All these bindings are intuitive “overlays” on conventional bindings.

Emacs shines when packages combine to form a whole greater than the sum of its parts. Changing basic key bindings is the quickest way to vitiate that symbiosis.

Unfortunately.

And while they may be old school, traditional, and orthodox, they are by no means idiosyncratic. They’re widely supported: readline, bash, everywhere on macOS, even modern browsers. E.g. you can actually paste in bash: try killing something with C-w or C-k, and paste it back using C-y. Or transpose arguments using C-M-t. Navigate suggestions in Firefox using C-n and C-p. Bash even supports undo using C-/.

All to say: learning emacs movement keys pays off.


Firefox opens a new window when I press C-n. Is this a setting that you have to enable?

The OP is probably on a Mac, where Emacs movements use Ctrl but FF and other apps use the Cmd key, so they always work as there's no conflict.

Is there a way to enable such a mode on a Linux Desktop Environment, so most mnemonics use Super- instead of Ctrl-?

Honestly, being able to use Emacs movements everywhere is one of the reasons I stay on macOS.

> Changing basic key bindings is the quickest way to vitiate that symbiosis.

Unless you change those as well

> All to say: learning emacs movement keys pays off.

It can also cost you RSI, so not worth it


I've found it crucial to have Control mapped to the keys immediately next to and on both sides of the spacebar. Thumbs are stronger than pinkies for modifying keypresses.

Yes, that's a good idea, since Control is the most common keybinding modifier, so it helps in other apps as well (likewise, using Cmd on a Mac would be preferable to using the literal Control).

Legacy. It’s how things used to be done. Just like Unix permissions, shared filesystems, drive letters in the filesystem root, prefixing URLs with the protocol, including security designators in the protocol name…

Be careful about ascribing reason to established common practices; it can lead to tunnel vision. Computing is filled with standards which are nothing more than “whatever the first guy came up with”.

https://en.wikipedia.org/wiki/Appeal_to_tradition

Just because metadata is useful doesn’t mean it needs to live in the filename.


If the alternative were putting the information in some hypothetical file attribute with a similar or greater level of support/availability (like for filtering across various search engines and file managers), then I'd agree there's no reason to keep it in the file extension in particular. But the alternative here seems to be not having it available in such a way at all (just an internal tag particular to the JXL format).

I’ll bite: how does open source have nothing to do with a comment discussing “freedom oppressing software” and “Stallman”?

To be honest, I couldn’t imagine a word more related than "open source". Isn’t that conjunction literally the acronym F/LOSS?


Since you're speaking as a moderator I'd like to ask for clarification on the official position:

Was that actually a personal attack, or was it a verifiable claim about the quantity and type of submissions by this user? Is the problem that it was labeled "propaganda", and would it have been ok without that word?

I thought it was useful context to have a look at the submission history. There is a slew of recent [dead] submissions. At what point is it fair to call that out? Or is it about the wording?


It's generally not OK to bring up someone's past activity, whether that activity be on HN or elsewhere, as a way of attacking someone in a discussion on HN. It fits within the "generic tangents" guideline. We can never know if they still agree with what they said or posted in the past. The submitter's history, and indeed the submitter's identity, is not really relevant to the substance of the article, and we want the discussion to be about the substance of the article. (Of course it's relevant if the submitter is the author, because then they can engage in Q&A about the article's content.)

If users notice that someone is posting large volumes of low-quality content (i.e., spam, self-promotional content or articles that break the guidelines) they can email us and we'll investigate.

In this case the user in question just posts a lot of stuff from mainstream publications on either side of the ideological centre – i.e., lots from the NY Times, Washington Post and The Atlantic but also WSJ and Bloomberg. The articles that are [dead] are from sites like The Information that are only banned due to being hard-paywalled.

It's obviously inflammatory to describe their pattern of posting as "propaganda". (Sure it can be argued and debated in the right context, but this is not that.) But even without the word "propaganda", the guidelines still ask us to keep discussions on-topic and to avoid generic tangents.


Ok thanks, makes sense.


> softwareC # available in nixpkgs, but because nixpkgs maintainers are hardline purists it takes 15 minutes to compile from source and ain't nobody got time for that

Which package is that? Is it proprietary but source-available? Any free software which is built from source is built by Hydra and available from the binary cache to downstream users.


Terraform is a notable example, yeah. It takes like 7 minutes to compile when you'd get it in seconds by pulling the binary.


When Hashicorp did their rug pull, I just switched my team to OpenTofu as soon as it had a stable release. No regrets; it's been great. (I did evaluate both projects at that time, of course. But it ended up being a clear decision.)

