My biggest issue with this whole situation is that the user gets a better UX with no encryption whatsoever on an http:// site than with a self-signed or expired cert on https://.
I know all the stories about MITM attacks, but the fact is, it is still MUCH easier to accidentally or purposefully log unencrypted http:// traffic in a passive manner than it is to actively spoof an https:// connection.
Especially for local LANs, but also for small websites, there should be a way to use TLS with a self-signed cert to say hey, I'm not making any strong claims of identity or privacy here, I just want some modicum of obfuscation of the traffic. Also, a user should be able to trust a specific cert once on first visit, and then be warned only if that cert changed.
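Roughly the trust-on-first-use behaviour I have in mind, as a minimal sketch (the JSON fingerprint store is a made-up stand-in for whatever a browser would keep internally):

```python
# TOFU sketch: pin a server's cert fingerprint on first visit, and warn
# only if it later changes. The pin store path is hypothetical.
import hashlib
import json
import ssl
from pathlib import Path

PIN_STORE = Path("pinned_certs.json")

def fingerprint(host: str, port: int = 443) -> str:
    # Fetch the cert without chain validation - that's the point of TOFU.
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def check_tofu(host: str) -> str:
    pins = json.loads(PIN_STORE.read_text()) if PIN_STORE.exists() else {}
    fp = fingerprint(host)
    if host not in pins:
        pins[host] = fp                        # first visit: trust and pin
        PIN_STORE.write_text(json.dumps(pins))
        return "pinned on first use"
    if pins[host] != fp:
        return "WARNING: certificate changed since first visit"
    return "matches pinned certificate"

print(check_tofu("example.com"))
```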
Also, the fact that the entire world's infrastructure relies on some small, centralised non-profit in the USA (LetsEncrypt) makes me very nervous. Ordinary citizens can end up on the wrong side of sanctions through no fault of their own...
> My biggest issue with this whole situation is that the user gets a better UX with no encryption whatsoever on an http:// site than with a self-signed or expired cert on https://.
I think part of the issue was that priorities shifted. Initially, SSL was for commerce. You were looking for assurance that your credit card number wouldn't be captured in flight, and would go to the right, verified party.
In this context, a self-signed certificate is somebody claiming "trust me, I'm a bank manager" despite being unable to prove any association with your bank. And somebody with an expired ID might be a former, disgruntled employee whose ID has expired because they don't work there anymore. Both of those are far more suspicious than somebody not claiming to be anything in particular.
> Especially for local LANs, but also for small websites, there should be a way to use TLS with a self-signed cert to say hey, I'm not making any strong claims of identity or privacy here, I just want some modicum of obfuscation of the traffic.
That's not really reliable. The moment you install that as a norm, you'll have various countries doing MITM and generating their own cert for the site. Any confidence such a scheme provides is extremely suspect, as it relies on nobody exploiting the flaw for personal benefit.
> Also, a user should be able to trust a specific cert once on first visit, and then be warned only if that cert changed.
That only works if you think your attack mode is a hacked/malicious access point at a cafe or hotel. If you're in a place like Russia or China, the state has the resources to ensure you always get the same MITM-ed cert, unless you play games with VPNs, which may well land you on some sort of watch list.
As I said, I am well aware of the perils of MITM.
There are mitigations for all your concerns, and in each case, the question should be: is this better or worse than plain HTTP?
As I mentioned below, mitigations include: restricting this scheme to IP addresses only, non-routable netblocks only, or certain TLDs like .local, .lan, .personal, etc...
In regards to oppressive regimes, the state can also block all traffic unless you relent and install their CA cert in your browser bundle.
Mitigations here would be certificate transparency, pinning etc...
I would also suggest that CA certs should be restricted to certain TLDs.
Important websites can use all these mitigations, while still allowing my scheme for connecting to my Raspberry Pi, kitten blog, or wifi router.
> As I said, I am well aware of the perils of MITM. There are mitigations for all your concerns, and in each case, the question should be: is this better or worse than plain HTTP?
I think it would be initially better, then gradually become worse. And that's a horrible thing when the public is concerned.
There are still people out there concerned about the "memory effect" for battery charging, and recommending a full discharge every time, even though that advice has been obsolete for decades now due to different battery chemistries. But the public easily latches on to simple advice and doesn't consider the technical reasons for it.
So I imagine the same would be the case here. You'd have a marginal improvement for a short time, until the situation changes and suddenly people have to absorb "Yes, this was fine in 2023, but now is a complete no-go in 2026".
Since we're considering UX here: what UX do you propose that would reliably tell my nigh-computer-illiterate mom what to do with "the self-signed certificate for this site changed" if she receives it at a hotel while traveling? And what if she first opens the site in a hotel abroad, then comes back home and gets it there? How are non-experts supposed to untangle that?
Also, today, if your Mom visits google.com for the first time and the hotel blocks port 443... guess what? It will try to connect to google.com using HTTP on port 80... at which point the hotel can inject whatever they like.
In terms of UX, in my scenario, if they "really" wanted to, the browser could fake an http:// scheme along with the crossed-out lock icon, effectively identical to the status quo in terms of UX, but with improved (not perfect) privacy.
I expect that to go away eventually, by browsers simply not allowing HTTP for non-local requests. This means a hotel doing this looks like one where wifi just doesn't work.
What kind of router, Raspberry Pi, or kitten blog will your Mom be visiting at the hotel that would be of any importance?
As I said, I wouldn't suggest allowing this scheme for anything important, such as Google, banks etc...
To answer your question, I don't think the TLS cert should ever change for these kinds of non-identity certs. If they do, the standard warning can apply.
My point is that there should be an escape hatch, for narrowly defined use cases such as hobby websites, wifi routers, etc., that provides resilient solutions without training users to bypass security controls like the current scheme does.
Right now, if I want my Mom to configure her Huawei wifi router, she has the choice between sending her password in plain-text to a trivially spoofed website, or being trained to ignore TLS warnings and overriding those warnings in her browser, before sending her password into a still-spoofable website.
> What kind of router, Raspberry Pi, or kitten blog will your Mom be visiting at the hotel that would be of any importance? As I said, I wouldn't suggest allowing this scheme for anything important, such as Google, banks etc...
That is part of the problem. How is she supposed to know what's okay or not in what context?
The simplest answer that ensures security as well as we can is to simply not give the user any options. Everything must be encrypted, nothing can be self-signed, plain HTTP support is disabled.
> Right now, if I want my Mom to configure her Huawei wifi router, she has the choice between sending her password in plain-text to a trivially spoofed website, or being trained to ignore TLS warnings and overriding those warnings in her browser, before sending her password into a still-spoofable website.
I think that's solvable, and may have already been solved.
I think current ASUS routers already contain a valid certificate for something like router.asus.com, and the router responds to HTTPS on that address. Or something along those lines, I've not really interacted much with it.
But the point is that in such a scheme, the router would contain a valid cert, issued by a valid authority.
> There's still people out there… recommending a full discharge every time, even though that advice has been obsolete for decades
The battery UX should hide this implementation detail from the user. If 20-80% is effectively the real 0-100%, just present it that way and provide some override setting for "power users", so to speak.
I’ve always found it weird that expired certs are treated like a doomsday scenario by the browser. Technically they aren’t valid, but if it’s just because they timed out, the level of threat seems low. The only danger is that they might have been on a CRL that also trimmed the certs when they expired. In practice, CRLs are barely used.
LetsEncrypt is not the only ACME provider, and there are hundreds of regular CAs. Nothing about TLS certificates is centralized; there's just market concentration as a result of good UI/UX by LetsEncrypt. There are open-source ACME implementations available, the protocol itself is standardized (RFC 8555), and nothing keeps other CAs from running the same service; in fact, many are planning to do just that.
You're right that there are additional ACME providers, but the reliance on just a handful of default root cert stores is what makes HTTPS centralized, even if TLS isn't.
I'm using strong protection in Chrome and it prevents me from browsing HTTP websites unless I explicitly allow it.
So while you're correct, it's just a historical issue; soon browsers will treat HTTP like bad HTTPS, though not in the sense you're proposing: they'll forbid both.
> Especially for local LANs, but also for small websites, there should be a way to use TLS with a self-signed cert to say hey, I'm not making any strong claims of identity or privacy here, I just want some modicum of obfuscation of the traffic. Also, a user should be able to trust a specific cert once on first visit, and then be warned only if that cert changed.
This is asking for trouble. If a site presents itself as https://foo, it should be foo according to global norms of what that means. No self-signed certificates.
What I think is needed is a way for a site to make a claim that it can prove in a decentralized way. Here are some examples of ways this could work:
The device has a certificate (with no expiration!) identifying it, signed by the vendor. The vendor provides a new kind of certificate saying that the device cert matches the serial number. The vendor refreshes this certificate periodically. There are thorny issues involving keeping this efficient, revoking problematic certificates, having the client (which likely has Internet access, but maybe not if the device is a router, for example) refresh the certificate if the device itself can’t, etc.
The origin could be literally a hash of the device's public key. Sure, it’s not human readable, but it could be bookmarked. To make this work, routing info needs to be added too.
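As a toy illustration of what such a key-derived origin could look like (the .device suffix and the name format are made up; Tor's onion addresses use a similar construction):

```python
# Toy sketch: the "hostname" is just an encoding of a hash of the
# device's public key, so holding the matching private key is what
# proves identity. The ".device" suffix is invented.
import base64
import hashlib

def key_derived_origin(public_key_der: bytes) -> str:
    digest = hashlib.sha256(public_key_der).digest()
    label = base64.b32encode(digest).decode().rstrip("=").lower()
    return f"https://{label}.device"

# Placeholder key bytes; a real client would take the public key from
# the device's TLS handshake and check its hash against the origin.
print(key_derived_origin(b"0Y0\x13...example DER public key bytes..."))
```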
> My biggest issue with this whole situation is that the user gets a better UX with no encryption whatsoever on an http:// site than with a self-signed or expired cert on https://.
I think that's a well-acknowledged issue but those who disagree on less-severe expiry warnings would simply argue here that it would be preferable to be strict in both cases and that the lax approach to http:// is just legacy baggage we should work toward getting rid of.
Personally, I'm a little on the fence. The expiry UX could definitely be "improved" (made more helpful) without making it more lax. Another thing to consider is that going from lax to strict is MUCH MUCH harder than going from strict to lax, so easing severity now would make it difficult to undo that change later.
That legacy baggage is the only thing that allows older hardware to connect to the modern network. It's the only thing that allows folks the agency and autonomy to set up their own services and share them with folks locally, without requiring the blessing and grace of a distant 3rd-party authority.
I've spent 20 years working with advanced PKI and cryptography in many different domains and form factors, and what I've learned is that even with the best of intentions, they are all fragile and their default state is broken, without constant maintenance.
Availability and resilience to failure are key pillars to security that are often overlooked.
In the past 20 years, all of the critical failures in PKI systems that I have seen were due to expiring certs, expiring CRLs, failure to distribute new PKI in time, accidental deletion of key PKI, missing intermediate certs. None were due to MITM, weak crypto, spoofed packets, use of plain HTTP. Make of that anecdote what you will.
> In the past 20 years, all of the critical failures in PKI systems that I have seen [...] None were due to [...] use of plain HTTP.
Not sure how a PKI failure specifically can be due to use of plain HTTP, but I assure you there's been plenty of other very real security failures over the past 20 years due to use of HTTP.
> That legacy baggage is the only thing that allows older hardware to connect to the modern network.
This sounds like legacy baggage, yes. The term "legacy" is not a value judgement. It doesn't mean "bad", it just means "old".
Here is a thought experiment for you:
Right now, I have a 45 year old rotary telephone working fine in my living room, hooked up to VOIP with an adaptor.
In 40 years time, how will anyone be able to make use of my "antique Internet Radio / Amazon Alexa"?
Virtually zero appliances / embedded systems sold today allow you to configure the CA bundle. Even Android is locking this down bit by bit as they don't want anyone peeking at all the surveillance traffic their Apps are sending to the internet.
Your 45 year old rotary telephone could also have encrypted the numbers you're dialing. Buying user-hostile devices is what leads to user-hostile behaviour.
Apps needing to opt into CA certificates are an annoyance for sure but in 45 years the API those apps are talking to won't be running anyway. You'll still be able to buy WiFi adaptors for whatever tech we'll use by then to physically hook up your current devices, but the network itself won't work unless you set up a server for yourself.
Your converter box is similarly difficult: an old "speaker hooked up to a wire" protocol has been converted into a fully fledged Internet appliance. The POTS services that the phone wants to connect to are no longer there, so you need to spoof them; the same will be true for the smart crapware we buy today.
> Here is a thought experiment for you: Right now, I have a 45 year old rotary telephone working fine in my living room, hooked up to VOIP with an adaptor.
The adaptor in your analogy sounds like the equivalent of a local transparent proxy.
A more apt formulation of the analogy would be phone companies persisting DTMF to avoid the need for adapters.
----
<off-topic> What adapter do you use? I also have a rotary phone & have been struggling to find a good one...
A local proxy in this analogy would have to be able to MITM the traffic... which is unlikely to work with an Alexa. I'd like to see the EU mandate customer configurable CA bundles, but I won't hold my breath!
---
https://www.dialgizmo.com/
A bit finicky, but does what it says on the tin ;-)
If the definition of "older hardware" is closed SaaS-supported media products, then I guess this is a different discussion than I thought. I'd be surprised if the SaaS support lifespan of things like Alexa would even be long enough for the hardware to be in any way usable by the time it reaches an age considered "old", but... if it does, then I'd suggest the sibling commenter's point about selection criteria fits here.
> I'd like to see the EU mandate customer-configurable CA bundles
Agree, but I'd go broader - a right to flash, or some kind of general firmware/OS/software openness mandate, would be nice to see.
> http:// is just legacy baggage we should work toward getting rid of.
I, the server, should decide what protocol to use, not the client, i.e. some software provided by two of the largest corporations in the world, which are also, by sheer accident I suppose, American.
I can send encrypted content over that insecure channel that only some receiver could decrypt and read.
It's none of Google's or Apple's business like it wasn't Microsoft's business to impose their browser and their standards on all of us.
> I can send encrypted content over that insecure channel that only some receiver could decrypt and read.
We've tried this approach with email, and it has not resulted in a world where I can easily send secure emails to anyone I know.
Even setting aside the problem of inconsistent clients, you're asking for a world where every server reinvents the wheel, and you haven't even begun to think about solving authentication (which is a very hard problem even with TLS).
I'm simply saying that HTTP is perfectly fine and it's not legacy.
Of course it's easier to pay for a certificate from a certification authority that maintains the infrastructure. And no, LetsEncrypt is free only on the issuing side; maintaining HTTPS has its warts (for example: renewing the certs every 3 months!)
But the problem is not HTTP: HTTP in the hands of people who know what they are doing is completely okay. If browsers ban HTTP, I predict an explosion of protocols like Gemini or something similar.
A lot of low-power devices don't need or can't handle HTTPS, and there's no problem if what they do doesn't need security or identity verification.
Meanwhile, it's baffling that we are pushing for non-public, non-state-run identity authorities on the internet, while in the UK, Japan, Russia, the USA, and many other countries such an authority doesn't even exist for real people...
> it's baffling that we are pushing for non-public, non-state-run identity authorities on the internet, while in the UK, Japan, Russia, the USA, and many other countries such an authority doesn't even exist for real people...
This I'm fully onboard with. We absolutely need to be more active in moving away from this approach of centralised authorities - there are unfortunately no real candidates for this outside of the blockchain space. I think we're stuck in an awkward time where many "I need an alternative to centralised systems" innovators end up turning to blockchain, which inevitably leads to vapourware. Hopefully that tendency disappears soon.
Otherwise though, you seem to be avoiding the elephant in the room with HTTP.
> there's no problem if what they do doesn't need security
The fundamental problem is that users need security, and implementers are tasked with making this decision on behalf of users (users don't "choose" to use an unencrypted protocol on the web). Implementers have historically not been the best stewards of user needs. IOW: there are far too many cases of things that do need security where implementers don't believe it does.
> there should be a way to use TLS with a self-signed cert to say hey, I'm not making any strong claims of identity or privacy here, I just want some modicum of obfuscation of the traffic.
How would a browser know whether a site presenting a self-signed cert is one where no strong claims of identity or privacy are needed, or one where strong claims of identity are needed but which has been MitM'd?
Also, do this without the browser talking to some "mothership" to ask how a domain should be treated, because that would leave that party in a position where it gets to see all of your browsing history.
Requiring a remote 3rd party to bless a transaction in order for two computers on a local network to talk to each other is tyrannical and anti-resilient.
DNS? crt.sh? Certificate pinning? Apply this only to non-routable IPs? Apply this only to certain TLDs, such as .local, .lan, .personal?
There are many options.
Also, how would this be any worse than visiting a plain HTTP site instead?
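A rough sketch of the gating options above (the TLD allow-list is aspirational: .local exists via mDNS, but .lan and .personal are not real TLDs today):

```python
# Sketch: only relax certificate rules for hosts that are plainly local.
import ipaddress

RELAXED_TLDS = (".local", ".lan", ".personal")  # hypothetical allow-list

def allow_self_signed(host: str) -> bool:
    try:
        ip = ipaddress.ip_address(host)
        return ip.is_private or ip.is_loopback or ip.is_link_local
    except ValueError:
        pass  # not an IP literal; fall through to the TLD check
    return host.endswith(RELAXED_TLDS)

assert allow_self_signed("192.168.1.1")
assert allow_self_signed("router.lan")
assert not allow_self_signed("example.com")
```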
DNS can be MitM'd. crt.sh would be in a position to see all your browsing history.
The local thing would work, but of course only for local hosts.
It would not be worse than using plain http and my personal opinion is that visiting a plain http site should have the same UX as visiting a self-signed one.
All fair points, and I would settle for HTTP equivalent UX.
DoH and DoT would go some way to mitigating DNS inadequacies (although they have their own issues in terms of network autonomy).
I personally think the best long term solution would be for each TLD to maintain the CA bundle and TLS standards for that TLD.
That way there is no case where a CA in CN can issue a cert for google.com.
It would also specifically allow non-identity locally issued certs for .local, .lan, .hobby etc...
You can still have an untrustworthy page, and the user still has to make that determination, with TLS or without. If you connect to a bad guy's server, you will still get owned.
The problem is that technically the *user should not make that determination* - for a casual user, TLS is transparent. That takes the burden of knowing whether traffic was or was not MITMed off the end user. End users should make fewer technical decisions, because they want to browse websites - not worry about whether someone is injecting stuff into their traffic.
Your mention of us all using LetsEncrypt and similar is beyond my understanding of cryptography, but why do certs need to be signed by a central authority exactly?
Ultimately every cert is signed by a Certificate Authority. This is a "Trust Anchor". An authority that you trust implicitly. Your web browser maintains a list of these trusted parties, which are measured in the dozens and only change occasionally after careful scrutiny by Browser vendors.
If your cert is not signed by one of these CAs, there is no way to verify its veracity. That is why the browser gives a scary warning. I could issue a cert claiming to be google.com without any deterrence.
Until recently all such authorities charged a fee to issue you a verified certificate. Also the process was usually not fully automated and required human intervention to renew a cert.
LetsEncrypt was a major innovation for two reasons:
1. They provided the certificates for free, no strings attached
2. They provided a fully automated and optimized process to issue/renew/deploy the cert
This had the effect of making HTTPS accessible to everyone, and is the reason that HTTPS has become the default rather than only being used for a small fraction of websites (e-commerce etc...).
Overall this has been a positive development and has raised the bar against mass-surveillance across the world. However, the downside as mentioned, is that much of the world's infrastructure now relies on this small company. Since the certs are only valid for 3 months, any blockage in that renewal process means rapidly failing services.
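For a concrete look at the trust anchors mentioned above, here's a quick sketch with Python's standard ssl module, which loads the system's CA store (roughly what a browser bundles):

```python
# Peek at the trust anchors your TLS stack relies on: every https://
# connection is ultimately vouched for by one of these CA certificates.
import ssl

ctx = ssl.create_default_context()  # loads the system CA store
cas = ctx.get_ca_certs()
print(f"{len(cas)} trusted CA certificates loaded")
for ca in cas[:5]:  # show a few CA organisation names
    subject = dict(pair for rdn in ca["subject"] for pair in rdn)
    print(subject.get("organizationName"))
```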
The other answer to your question seemed to me to have guessed wrong what you're concerned about. My guess is that like a lot of non-experts, your thought was "Why do we need this CA role?" and that, fortunately, is something where I can appeal to your intuitions rather than needing some mathematical proof about cryptography you won't understand.
This is about identity. How can we (and everybody else) agree on the identity of something? Is "Chris Pratt" the movie guy we've both heard of, or is it some Belgian guy's friend's brother you met once at a party? The Screen Actors Guild insists its members all have distinct names so you can tell them apart. If your real name is Clint Eastwood and you go into acting, too bad: change it or you won't be allowed to work on most stuff with union rules. You don't need a legal change of name (although if you're a serious actor you might decide it's less bother to get one), but you must use a name distinct from those already in use in the industry.
Naturally there can't be some objective "truth" to a name. People may say "She looks like a Deborah" but that's not really how it works - when we find someone in a coma with no ID we don't go "Oh, he looks like a Jim Smith, of 420 Springfield Crescent", we have to put out a public appeal with photos. If I show you a web page it may look like Wikipedia, but I can trivially do that myself, so the real Wikipedia is the one everybody agrees on, and if for some reason we all agreed tomorrow that's not Wikipedia, it wouldn't be.
So, with no objective truth† we have to instead have an authority, and for everybody's convenience we should all trust at least roughly the same authorities, so that we're all agreed about who we're talking about.
† We can use cryptography to "assign" things names, but these names aren't very satisfactory, that's how Tor's private services work, which is why they have ugly names like facebookwkhpilnemxj7asaniu7vnjjbiltxjqhye3mhbshg7kx5tfyd.onion -- notice that all those letters are crucial, facebookwkhpilnemxj7asaniu7vnjjxiltxjqhye3mhbshg7kx5tfyd.onion is one letter different and would be a different Tor service not operated by Facebook.
> How can we (and everybody else) agree on the identity of something?
We do; let me rephrase: billions of people do it all the time, every day, on WhatsApp.
It's called TOFU
The first T means Trust.
Another example: Protonmail, it uses PGP, it works.
The important thing for privacy is the encryption part, not the identity part.
Even more so when we all know that full fledged HTTPS site put TENS OF MEGABYTES of garbage on their web pages to track people.
Identity: I want it confirmed that I'm talking to my bank, but why the bank cannot buy a 10-year certificate is a mystery to me. I sure hope they'll still be in business 10 years from now; at least they should be able to not think about this minutia so often.
> why the bank cannot buy a 10-year certificate is a mystery to me. I sure hope they'll still be in business 10 years from now; at least they should be able to not think about this minutia so often.
There's no more reason they should "think" about this than, say, testing fire extinguishers, it's just routine maintenance, it is presumably somebody's job to ensure all the routine maintenance gets done. If you're holding a meeting about the certificates on the web site, rather than knowing that's maintained and monitored properly as part of normal operations, you screwed up.
Now, why does it need maintaining? Why not have them issued for 10 years (so, longer than many employees will work for the bank)? Well, the maximum lifetime of a certificate in the Web PKI is in practice the best possible agility we can achieve for the entire ecosystem, so the longer the maximum lifetime, the slower we're able to fix any problems.
If the bank's new certificate today is valid for 10 years that means if we sunset things which are a terrible idea tomorrow they are still polluting the ecosystem until at least January 2033. A new browser, written by a team who are all in primary school today, might ship in 2033 and yet it's expected to put up with every weird thing we're still allowing, even if it's known to have been a bad idea for about a decade by then.
Currently the rule is 398 days, so if we outlaw something tomorrow, it's no longer a problem by the end of February 2024. More realistically, if we argue about it for a few weeks, and then agree to ban it from May 2023, it's no longer a problem by the second half of 2024.
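To put that agility argument in concrete dates, a quick sketch (the issuance date is illustrative):

```python
# Maximum cert lifetime bounds how long a deprecated practice lingers
# after the last certificate using it is issued.
from datetime import date, timedelta

last_issued = date(2023, 1, 25)            # "tomorrow", roughly
print(last_issued + timedelta(days=398))   # 2024-02-27: the 398-day rule
print(last_issued.replace(year=2033))      # 2033-01-25: 10-year certs
```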
> nothing prevents reissuing new certificates before expiration, if necessary.
So you want a product advertised with a 10 year lifespan, but sometimes it fails much earlier? I guess I have great news, you can use the existing product this way, although everybody you work with may find you exasperatingly incompetent.
The CA signing provides different levels of validation. DV (Domain Validation) certificates require demonstrating control over the DNS record(s) to the CA, ensuring that the server (IP address) responding to your request has demonstrated, to the CA, control of the domain name by which you addressed it.
Let’s say your local DNS is poisoned to resolve ss64.com to a nefarious server at a specific IP address; that server’s certificate won’t be signed by a CA (unless they also controlled ss64.com’s DNS records, in which case there would have been no need to poison your local DNS). Your connection to this server can still be encrypted via a certificate, but the CA won’t be providing validation of its identity or affiliation with the real ss64.com.
The CA's signature binds the public key to the domain it should serve, and that public key pairs with a private key that secures your data. Assuming that a) the private key isn't mishandled, b) the signatory has a reasonable process to ensure that whoever holds the private key actually controls the domain, and c) all potentially trusted signatories can be trusted, then you can reasonably assume the site you're visiting is legit.
The actual encryption doesn't need identification: encryption alone ensures others can't listen to your conversation, but it wouldn't help if the person you think you're talking to isn't actually the right person. This is a realistic problem, because there's nothing in DNS or routing that ensures trust.
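To make the distinction concrete, a small sketch using Python's standard ssl module: both connections below are encrypted, but only the first verifies the CA signature and hostname, so only it defends against the DNS-poisoning scenario above:

```python
# Encryption vs. identity: both contexts encrypt the channel; only the
# default one checks the CA signature and hostname, so a MITM presenting
# any certificate would be accepted by the second.
import socket
import ssl

host = "example.com"

verified = ssl.create_default_context()   # CA + hostname verification

anonymous = ssl.create_default_context()  # encryption only
anonymous.check_hostname = False
anonymous.verify_mode = ssl.CERT_NONE

for name, ctx in (("verified", verified), ("encryption-only", anonymous)):
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(name, "handshake OK, negotiated", tls.version())
```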
> there should be a way to use TLS with a self-signed cert
Well, there is: all (afaik) current browsers have some kind of barely visible "yeah, I know, take me there anyway" button buried under a click or two on those cert error screens. The only exception to this that I can recall is servers that use an old version of TLS like 1.0, and in those cases, there is a browser flag that lets them load.
Sure, what I had in mind was a more user-friendly UX.
For example, if I could add something to the certificate subject to say: "This is an obfuscation only cert, not claiming identity or MITM resistance".
Then, the browser would either show the address bar like an HTTP page, or maybe show a notification instead of a block page, saying: "This is the first time you have visited this page. Identity is not verified. Trust identity?"
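For illustration, generating such a cert is straightforward with the third-party Python `cryptography` package; the "obfuscation only" marker below is entirely made up, and no browser today would honour it:

```python
# Hypothetical "obfuscation only" self-signed cert. The marker in the
# subject is invented; this just shows where such a claim could live.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
subject = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "router.lan"),
    # Made-up marker: no identity claim, traffic obfuscation only.
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "OBFUSCATION-ONLY"),
])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)                    # self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .sign(key, hashes.SHA256())
)
print(cert.subject.rfc4514_string())
```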
> "This is the first time you have visited this page. Identity is not verified. Trust identity?"
Users will click random buttons until the popup disappears. You can watch it happen in real time when you help the elderly with computer issues and don't say anything. Every option from cancel to close to OK is tried as whatever is happening doesn't work. Reading the error message itself is an action of last resort.
There's a good reason you can only bypass certain TLS errors by typing "thisisunsafe" into the error screen in Chrome; people just clicked the ignore button until the problem disappeared and then ran into trouble.
It is probably easier to give a browser a fake DNS result (i.e., a MITM attack) than to do the same to the LetsEncrypt authorizers.
Fake DNS result/MITM is one of the things that the SSL cert is supposed to guard against. Possibly the only thing that a domain-validated cert has going for it over an anonymous cert. Allowing a domain cert to be "renegotiated" from the browser would seem to defeat the purpose of having a domain cert at all.
Just this week I had an issue with a LetsEncrypt cert that wasn't renewed.
All of my users had visited the site and the certificate was the same. Browsers should give a less dramatic response if they have already seen the certificate and it simply expired. It's completely different from visiting a new site whose certificate the browser has never seen.
On the other hand, especially when offering an online service, cert monitoring and/or robust automation are essential. Blaming browser behaviour is missing the point in my opinion.
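That monitoring doesn't need to be elaborate; a minimal sketch of a remote expiry check (the host and threshold are placeholders):

```python
# Minimal expiry monitor: how many days remain on a host's certificate?
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'May 30 00:00:00 2024 GMT'
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

days = days_until_expiry("example.com")
if days < 14:  # placeholder threshold: alert well before renewal fails
    print(f"certificate renewal needed: {days} day(s) left")
```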
We have a repeatable build environment at work. However, we can't rebuild anything from 2017 or before anymore, because the environment back then no longer has any valid certificates with which to validate the servers we download artifacts from, or to verify that the artifacts signed back then are correct. Thus our build environment fails to be repeatable for long, because the old environment cannot be used or duplicated 100%.
Surprised to see no discussion of this in the context of browser extension signing, and Firefox's little fiasco. Apparently, Mozilla's best judgment is that, after expiry, the extensions should just stop working with no way to restore them ... even though this meant forced disabling of privacy features, which could have outed people and got them killed.
The key to all of this is user awareness, and I'm not sure users care very much. I own a small payment processor, and we've researched this quite a bit: users only really care about whether the little lock is present in the URL bar, and they leave if they get warning dialogs. All the other stuff - badges, changing the awesome bar color - has very little effect on users. Users are about 80% sensitive to interruptions that say "not secure" or something else scary.
There are a couple of things about certs that are very much arbitrary: the expiration date and any user-input identification data. CAs try to deal with the identification data by doing some kind of validation, but that has decayed into what amounts to proving you control the domain by doing some trivial thing. Expiration dates, as currently set, serve to ensure that people have to renew their certificates, and prior to Let's Encrypt, this guaranteed recurring revenue for CAs. Expiration dates make sense to ensure that at some point a compromised certificate would have to be replaced, but the way those dates are set is... very convenient for ARR.
Wow, the tone of the discussion here is, um, disappointing. I see people defending two extreme positions, both of which are indefensible, and no one (so far) actually tackling this problem in any kind of constructive way.
On the one side, there are the people who say that an expired cert should be a hard error because security is too important for any kind of compromise. On the other side there are people saying that an expired cert (or self-signed cert) is better than nothing, and so a user should be allowed to proceed with a warning.
What neither side is acknowledging is that there is no one-size-fits-all solution because different sites have different threat models. HN has a very different set of risks associated with it than your bank.
Obviously, if you go to your bank's web site and it presents a cert that expired five years ago, you should probably not be allowed to proceed.
On the other hand, if you go to your family's static HTML photo site you should probably be able to access it without any encryption at all. Even sites like HN or Reddit are probably safe to visit unencrypted most of the time.
In between is a vast ocean of grey. Example: shortly before midnight you log in to your bank's web site to pay your credit card bill, which is due the next day. As you fill in the form, the clock ticks past midnight and the cert expires. It is the exact same cert that was valid five minutes ago when you logged in. Should you really be blocked from completing the transaction?
IMHO, at the very least certs should have two expiration dates, a soft one, at which point users get warnings, and a hard one, at which point the cert stops working. There are probably better solutions, but the idea that it's perfectly fine to visit a site one minute before midnight and unsafe one minute after is untenable.
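X.509 certs only carry a single expiry date today, so as a sketch of the policy I'm proposing (the grace window is invented for illustration), a client could treat notAfter as the soft date:

```python
# Soft/hard expiry sketch: notAfter is the soft date; notAfter + GRACE
# is the hard one. The 7-day window is made up for illustration.
import time

GRACE = 7 * 86400  # hypothetical soft-expiry window, in seconds

def expiry_verdict(not_after: float, now: float | None = None) -> str:
    now = time.time() if now is None else now
    if now < not_after:
        return "ok"                  # still valid
    if now < not_after + GRACE:
        return "warn"                # soft-expired: warn, but allow
    return "block"                   # hard-expired: refuse the connection

# The midnight scenario above: one minute past expiry warns, not blocks.
assert expiry_verdict(not_after=1_700_000_000, now=1_700_000_060) == "warn"
```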
Most of my comments mention the fact that the escape hatch should be limited to certain use cases, such as local networks, certain TLDs etc...
Tying the validation requirements and CA bundle to the TLD would be a useful strategy and would in fact increase security in most cases.
For example, imagine the official Chinese government CA could only issue certs for .cn. The TLD could also mandate TLS 1.3 and the latest crypto algorithms.
This simultaneously protects the Chinese from Western interference, and Google from Chinese interference.
"Encryption only" never-expiring certs could be specifically banned for .com .bank etc... but allowed for .local, .lan, .hobby and plain IP addresses.
This increases security across the board without sacrificing autonomy.
Is there a better way for dealing with certs? Would a very very long expiry date be better, and a bigger emphasis put on being able to invalidate them? The number of times "it's always the certificate" pops its head up after an outage is on the increase (when it's not DNS or BGP!) :-)
We seem to be moving into the opposite direction, actually.
People fail to renew them because it is a very infrequent thing. At one point you could get certificates that were valid for five years. This was reduced to three, and is now even down to one year. If it is that infrequent, renewing the certificate becomes an ad-hoc thing, which is most likely poorly documented and easily forgotten about.
On the other hand, LetsEncrypt certificates are valid for 90 days, and I believe they want to make that even shorter. At that point the only viable way to deal with certificates is to set up tooling that will automatically renew it, solving the entire expiry issue in the process.
I think the opposite: a short expiry time à la LetsEncrypt, but with a process to "adopt" the new certificate. That is, the website can say: "I'm using this cert now, soon I'll be using that one".
Then the browser can be stricter about warning of unscheduled cert changes, and an expired-but-adopted cert is not a big issue, so browsers don't have to be so alarmist about it.
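A sketch of the client side of that adoption scheme (the announcement channel is invented; it's close in spirit to HPKP's backup pins):

```python
# Certificate "adoption": the site announces its next cert's fingerprint
# while the current one is live, so the rollover is a scheduled change.
pins = {"example.com": {"current": "sha256/AAAA...", "next": None}}

def on_connection(host: str, seen_fp: str, announced_next: str | None) -> str:
    entry = pins[host]
    if announced_next is not None:
        entry["next"] = announced_next       # remember the successor
    if seen_fp == entry["current"]:
        return "ok"
    if entry["next"] is not None and seen_fp == entry["next"]:
        entry["current"], entry["next"] = seen_fp, None
        return "ok: adopted the announced certificate"
    return "WARNING: unscheduled certificate change"

# Scheduled rollover: announce the successor first, then switch to it.
print(on_connection("example.com", "sha256/AAAA...", "sha256/BBBB..."))
print(on_connection("example.com", "sha256/BBBB...", None))
```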
Invalidation more or less doesn't work. Mandatory OCSP stapling could change that, maybe, but it also means your clients need to have much tighter time synchronization[1], your servers need to be able to make the OCSP requests, and the OCSP servers need to have relatively high availability. An extended DDoS against an OCSP server in a mandatory stapling environment would effectively invalidate large numbers of certificates and be a real big mess.
[1] and no localtime bugs; I've worked with platforms that don't accept certificates where NotBefore interpreted in local time hasn't been reached. Which means you've got to let your certificates sit for hours before using them if you have customers in Hawaii or other pacific islands on this side of the date line.
At least certificate errors are big and in your face, unlike bgp and sometimes dns.
No completely and super hard disagree. An expired certificate is not a negotiable or "soft" error. What the hell is wrong with people today? It's not rocket science. Get your shit together or fuck off for the sake of everyone else. Nobody cares about all the layers of bureaucracy between you and renewing that cert. That's your fucking problem. Seriously no joke. Stop making this a "mere implementation detail" and you'll be fine. Cryptography is a razor sharp thing. Treat it accordingly.
I know you're being downvoted for the tone, but I agree entirely. Security is not something to sacrifice to gain less angry users. I do agree, however, with the sentiment that the UX surrounding security leaves a lot to be desired. In most cases we train users to ignore or work around security problems - we don't give them tools to solve and embrace them.
Disagree with your disagree. I understand there’s a recession and security people have to justify their salaries.
The most secure system imaginable is for your users to shut their computers and go outside. If you can't provide security without sacrificing usability, your system is worthless.
The truth is that users want products that feel secure, rather than products that are secure.
Here's the thing: expired-certificate warnings reduce security. Because they're excessively dramatic about a routine non-issue, people learn to ignore and bypass them. Now people won't heed real certificate warnings.
Unfortunately, the browser security nerds don't understand human psychology, and are more scared of the fact that an expired cert can't be revoked (a nearly pointless edge case) than of users ignoring all cert warnings entirely, which they do now. A classic example of engineers who don't understand their users.
> Security is not something to sacrifice to gain less angry users.
Of course it is - it depends on Capital-C-Context.
Sure, for the bank, the site you are supplying your credit card details to, your email, etc. - security is non-negotiable.
For hackernews, for reddit, and for similar sites, then security is something to sacrifice, once again depending on context.
I've trusted this certificate for the last 2, maybe 3 years. It's unreasonable to assume that 5 minutes past midnight on the expiry date, the cert turned from "completely trustworthy" to "100% certainty that this is a phish, scam or similar".
> I literally just said that I agree the UX is poor. Did you read my comment?
But I agree with that comment. The one I disagreed with is:
> Security is not something to sacrifice to gain less angry users.
Maybe I should rephrase (I'm a notoriously poor communicator) ...
Sometimes (like in the cases I pointed out), the security messages and warnings must be sacrificed because the practical security either doesn't matter (like hackernews) or hasn't been compromised (like the 5m after midnight example).
An expired certificate _is_ a soft error, and in most cases nobody gives a fuck. For example, if HN's certificate expired and my browser absolutely prohibited access to it on the basis of that, I'd switch browsers because there's literally nothing at stake if somebody is able to read my traffic to or from this unimportant site. There's even less at stake when it comes to the cryptographic security of some blog. I literally don't care if someone can read the blog entry as I download it from its publicly-accessible URL.
On the other hand, if my e-mail provider's certificate is expired, there's a little more at stake, and there are other services where the HTTPS security being broken can cost me money. Those I do care about.
I think what you are saying is that expiration is important. The reasoning "cryptography is razor sharp" is really hard to follow. Cryptography is precise, but what really would help people is understanding why expiration dates matter so much. Most people carry a driver's license, and have to renew it. We all know that nothing magically happened that day to change anything about the driver - so that expiration is bureaucratic. Why is the expiration date on a cert different?
The layers of bureaucracy are a barrier to the adoption of better security practices, and they are all of our problem, because at some point you are using someone's website or API that is insecure because someone had to get one more approval, or get someone to click one more button, and did not.
Imagine applying the same medicine to other situations:
- You're two minutes late; your appointment has been canceled.
- But I am here for the chemo. I drove 100 miles to be here.
- Get your shit together or fuck off for the sake of everyone else. Nobody cares about all the layers of bureaucracy between you and being on time. That's your fucking problem.
> Your doctor let their medical license lapse. They are legally not allowed to practice medicine until they renew it
medical licenses don't arbitrarily expire every 3 months.
But anyway, it's funny that medical licenses expire in some places.
Once a doctor, you're always a doctor, unless you do something wrong with your license and it gets revoked.
An expired license doesn't make your skills useless or you less capable.
If I had a stroke on the street, I would certainly trust a doctor to help me, even if his license had expired (again, who lets medical licenses expire? Not even in the USSR was the medical profession so bureaucratic!)
Who gave the issuer of the certificates and the browser's vendors the right to decide if I can or can't _visit a website_ that has an expired cert?
and what's the matter?
We accept E2E encryption on chats that use TOFU, but we should "fuck off" from web sites with an expired cert that hasn't changed, hasn't been revoked, and is exactly the same as before, providing the same level of security as before?
I don't understand this fixation, unless a lot of people make a lot of money out of this madness.
I mean, we all know that rotating passwords doesn't improve security, but suddenly making certs expire does?
People make mistakes and problems arise; if I need that website now and it's not available because CHROME or FIREFOX or SAFARI chose so, it's a problem for me.
I'm not a baby, I'm an adult.
I can't count how many times that particular piece of information I was looking for was hosted on an old website that's only accessible via HTTP (another thing security zealots don't want you to use) or had an expired certificate.
Let me take my risks and give me a way to disable your training wheels; I'm not Google's son.
And seriously, the entire f*king HTTPS business cannot rely on a non-profit USA org, sponsored by all the usual suspects.
That analogy is a bit off because the certificate problem is on the supplier's side, not the customer's. A more apt analogy would be "no you can't see the doctor today, because their passport expired yesterday".
Sure, in a perfect world we shouldn't need debuggers, seatbelts, emergency services, insurances and many other things, but that's not how reality works.
Well, the debugger is a good example, because it’s also not something that concerns the end user, but only the developer.
In the same way, certificate expiration is a problem that can easily, and with 100% reliability, be handled by operations. An end user should never be confronted with it.
Browsers could add a less severe warning before the certificate expires; for example, 3 days before it expires, show a "this certificate is about to expire, are you sure you want to continue?" warning. That would maintain the security guarantees around expiration while still getting the attention of users/administrators.
Isn’t that what happens now when the cert expires? Except when it’s expired it’s a lot harder for users to figure out how to bypass the warnings so they can visit the site to find a contact link to report the issue.
Remember not every site is actively checked by their maintainer every day.
In an ideal world it shouldn’t be needed (and likewise UX for expired certs wouldn’t be needed), but in practice I think it has merit.