Unpopular opinion: maybe the way to go is to create a separate Show HN just for bots, with instructions for the bots to follow: identify themselves and get their own category, similar to moltbook. If we can't stop it, maybe we could contain it in a dedicated space.
I'm not a fan of moltbots / openclaws (and any clones that popped up in the last month). I don't use them and try to discourage their use. That being said, millions of them are running anyway...
I doubt that people would respect it, so we'd still have the problem of distinguishing wheat from chaff in the 'human' section, and would also have a 'bot' bucket to maintain.
Back in the '90s/2000s there was a very popular tool named rrdtool for storing metrics in a round-robin structure on disk, especially suited to network metrics. The goal of the storage was to have a fixed size and cover only the last NNN days, circularly.
I use rrdtool to this day, as a building block, but this project looks much better.
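The round-robin idea is easy to sketch: a fixed number of slots where each new sample overwrites the oldest, so storage never grows. A minimal in-memory illustration of the concept (not rrdtool's actual on-disk format, which also does consolidation and interval normalization):

```python
from collections import deque


class RoundRobinSeries:
    """Fixed-size metric store in the spirit of rrdtool's round-robin
    archives: once `slots` samples are stored, each new update
    overwrites the oldest one, so memory/disk usage stays constant."""

    def __init__(self, slots: int):
        self.samples = deque(maxlen=slots)  # deque drops the oldest entry itself

    def update(self, timestamp: int, value: float) -> None:
        self.samples.append((timestamp, value))

    def fetch(self) -> list:
        """Return the retained samples, oldest first."""
        return list(self.samples)


series = RoundRobinSeries(slots=3)
for t in range(5):
    series.update(t, t * 10.0)
# only the newest 3 samples survive: (2, 20.0), (3, 30.0), (4, 40.0)
```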
Thank you, this draft is literally perfect, and I wish we had had it years ago. Most people don't know about acmev2 account rekeying either. It's great you decided to use the account URI instead of the public key thumbprint.
Recently I wrote a simple acmev2 tool specifically for manual upfront acmev2 account creation, rekeying, and printing TXT records on stdout for dns-persist-01:
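For context on what such a tool computes: in the standard dns-01 challenge (RFC 8555), the TXT record value is the base64url-encoded SHA-256 of the key authorization, which combines the challenge token with the account key's RFC 7638 JWK thumbprint (the account-scoped variants change what goes into the record name, not this basic construction). A stdlib-only sketch; the JWK values below are dummies, not a real key:

```python
import base64
import hashlib
import json


def b64u(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint: SHA-256 over the canonical JSON of the
    required public members only, keys sorted, no whitespace."""
    required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n")}
    members = {k: jwk[k] for k in required[jwk["kty"]]}
    canonical = json.dumps(members, separators=(",", ":"), sort_keys=True)
    return b64u(hashlib.sha256(canonical.encode()).digest())


def dns01_txt_value(token: str, account_jwk: dict) -> str:
    """TXT record value: base64url(SHA-256(token '.' thumbprint))."""
    key_authorization = f"{token}.{jwk_thumbprint(account_jwk)}"
    return b64u(hashlib.sha256(key_authorization.encode()).digest())


# Dummy EC public JWK purely for illustration
jwk = {"kty": "EC", "crv": "P-256", "x": "dummy-x", "y": "dummy-y"}
print(dns01_txt_value("example-token", jwk))
```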
X509 certificates published in CT logs are "pre-certificates". They contain a poison extension, so you can't use them with your private key.
The final certificate (without poison and with SCT proof) is usually not published in any CT logs but you can submit it yourself if you wish.
OP's idea won't work unless OP submits the final certificate to the CT logs himself.
Although the poisoned pre-certificates† are logged as a necessary part of offering the least-hassle product, which is the business Let's Encrypt are in, they, like most CAs, also log the finished certificate shortly after.
Here's the pre-certificate for this web site's current certificate:
This is good practice, but it's also just easier. If anything goes wrong, and sometimes things do go wrong, the trust store may say: hey, please provide all certificates you issued with these properties. If you've logged them, they're right there in the logs, published for everybody to see: no bother, no risk. If you haven't, you need your own storage and had better hope there aren't any mistakes. I'm sure LE do have their own copies if they needed them, but it sure is nice to know that's not what you're betting on.
† Poisoned pre-certificates are a "temporary" hack so that the certificate logging system can be demonstrated. If we ever really wanted this of course we'd develop a proper solution instead, right? Right? Every experienced software engineer knows that "temporary" usually means permanent in practice and so nobody was surprised by how this turned out.
All I'm saying is that publishing the final certificate is not required by the process, so just assuming it will be there is premature. A user may end up putting the precert on his HTTPS server and finding out the hard way.
The comment I made was explicit that this works for Let's Encrypt. You replied that it doesn't, apparently without checking the logs, because if you'd glanced at them you'd see it's roughly 1:1 pre-certificates to actual certificates from Let's Encrypt, so I explained that you're wrong.
I'm not disputing that there could be a world where you're correct, but it's not this world, which is why I even made that comment. That doesn't make relying on the logs for this a brilliant idea; it's just an observation that, in fact, it could work.
Thanks for the service. Personally I would lower the TTL to 120 or less.
Dyndns is used for personal stuff. There is no point caching an FQDN almost nobody uses. If anything, a low TTL is a benefit for recursive resolvers like 1.1.1.1 or ISPs': those FQDNs should not be cached, as there is zero benefit to keeping them in the cache for one guy hitting them once per day.
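For illustration, this is what a short-TTL dyndns record would look like in BIND zone-file syntax; the hostname and address are hypothetical:

```
; 120-second TTL: resolvers re-query after two minutes,
; so a changed home IP stops being served from stale caches quickly
home.example.net. 120 IN A 203.0.113.42
```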