GNU Radio 3.9 (gnuradio.org)
99 points by lukastyrychtr on Jan 18, 2021 | 53 comments


OK, so for anyone more familiar with this project: how feasible would it be to make my own home-network streaming service for OTA radio?

Like, I think it'd be fun to have something like an HTTP server in my home where I could hit something like "streamer/99.9fm" and get an Opus audio stream for that station.


You don't strictly need GNU Radio for something like that. A cheap RTL-SDR dongle and the `rtl_fm` command could work.
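
Something like this is a rough sketch of the pipeline (the frequency and encoder settings are placeholders, and you'd still need a small HTTP or Icecast front end to actually serve the stream):

    # tune 99.9 MHz, demodulate wideband FM, resample to 48 kHz mono PCM,
    # then encode to Opus-in-Ogg on stdout (pipe that into whatever serves it)
    rtl_fm -f 99.9M -M wbfm -s 200000 -r 48000 - \
      | ffmpeg -f s16le -ar 48000 -ac 1 -i - -c:a libopus -f ogg -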


I played around with this a bit last year when I first got my HackRF. It's got quite the learning curve if you, like me, are not already knowledgeable about RF and SDR.

The first step for your use case would be, I think, to acquire an SDR module that can receive broadcast FM. Then it would simply be a matter of having your server command the SDR to tune to 99.9 MHz and stream the result to the network.


Somewhat unrelated, but does anyone have any good resources for learning GNU Radio and SDR for someone who's mostly an idiot? I took a single calc course in HS (which I hardly remember) and have some very limited digital communications knowledge from watching a few videos. I know what I want to eventually build and have a nice SDR, but I don't know how to use GNU Radio. Most of the blocks I see, outside some rather specific modulation ones, seem to be primitive operations.


You may find Michael Ossmann's SDR course [1] helpful - it builds the knowledge from the ground up and uses GRC for implementation.

[1] https://greatscottgadgets.com/sdr/


Unfortunately that's a guide for older GNU Radio versions. It will give new users quite a bit of trouble at this point with all the namespace changes since 3.6. But back in 2015 it was a great way to learn.


I quite like Marc Lichtmann's https://pysdr.org (he happens to be on the GNU Radio board); it's quite refreshing and good for beginners.


What is the current "easy" tooling/setup for GNU Radio?

It used to be that you could download a GNU Radio live CD, and that worked well; however, it is no longer maintained. The latest release is from 2017, and I see it written that it is neither supported nor recommended.

Is installing the distro of my choice and then managing GNU Radio, plus associated utilities/applications and their dependencies, myself my only choice in 2021?


PyBOMBS is probably the best way if you want to be able to handle out-of-tree modules. If you just want to play around with the built-in functions, using a prebuilt binary from a package repo is the easiest.

https://github.com/gnuradio/pybombs#pybombs
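
From memory, the quickstart looks roughly like this (check the README above for the current syntax):

    pip install pybombs
    pybombs auto-config
    pybombs recipes add-defaults
    pybombs prefix init ~/prefix -R gnuradio-default
    source ~/prefix/setup_env.sh
    gnuradio-companion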


Depends on your Linux distro, but on current Debian, Ubuntu and co. you get 3.8.x.x using `apt install gnuradio`; Fedora uses dnf, and so on.

On Windows, `conda` has your back.
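
If I remember right, gnuradio is packaged on conda-forge, so something like this should do it:

    conda install -c conda-forge gnuradio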

So, it's really gotten better. Yay!


I have nothing substantive to contribute here but it's always nice to see a GNU project with a decent website. Presentable, clean, modern. Good stuff. My only quibble is that it's very slightly low on contrast.


Make a bookmarklet with this:

javascript:var%20x%20=%20document.querySelectorAll("div,p,li,a,hr,em,font,strong,h1,h2,h3,td");var%20i;for%20(i%20=%200;%20i%20<%20x.length;%20i++)%20{%20%20%20%20x[i].style.backgroundColor%20=%20"white";x[i].style.color%20=%20"black";}undefined;//alert("Done");;

It changes the background to white and the text color to black. I use that on worse websites than the GNU Radio one.


Interesting trick. When I find that a website's design would be improved by removing it, I tend to go with Reader Mode.


A related note/question: why do projects like this choose github over gitlab? It's become a weird world to me, where a GNU project could ethically choose github.

To be clear, I use github often as well, but it seems odd to me that a GNU sponsored project would.


gitlab is just another company; you probably mean a self-hosted gitlab server?


Hi: GNU Radio is not GNU-sponsored. Whatever that would be. Sure, we could put archives on Savannah, but I can have as many slow servers as I want...


They should at least push it to both (maybe blocking issues/comments/etc on one of them).


They moved away from SWIG to PyBind11. As they mention, this is going to be an annoying transition for my custom out-of-tree modules. Overall, though, I'm excited for the changes.

Lots of new GUI blocks in this release. I'm especially glad to see the new eye diagram.

Also the ability to load non-WAV audio files will be nice. OGG/Vorbis and FLAC are now supported. Plus a slew of minor conveniences such as a freq shift block and scaling options in the IShort to Complex block (useful for loading recorded samples from hardware).

If only it would compile easily on my Mac. (https://github.com/ktemkin/gnuradio-for-mac-without-macports has a prebuilt 3.8 version)


I have a general question about these SDRs: what is the latency between when the radio wave "hits" the dongle, and when it is detected by the software framework?


Buffers, buffers everywhere... It depends on the sampling rate and technology (USB, Ethernet, direct PCI), but in general 1 ms is achievable; 10 µs is not.

That's why BladeRF-wiphy ( https://news.ycombinator.com/item?id=25814237 ) implements the lower-level PHY on the FPGA; there's no way to send an IEEE 802.11 ACK frame within 10 microseconds of the end of the received frame, as required by the standard.


Depends on your buffer size, bandwidth, and what you're demodulating/decoding, but it's pretty fast. With a narrow bandwidth on a fast port and simple modulation/encoding, maybe a few milliseconds?


Counter-intuitively, it is often difficult to reach low latency with "narrow bandwidth" (you mean a low sampling rate), because many SDR interfaces are designed to fill full packets with data rather than send tiny packets (e.g. Ethernet packets). This problem goes away above 1-10 Msps.
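
For rough numbers (the packet size here is just illustrative): the time to fill one packet is samples-per-packet divided by the sample rate, so a 1024-sample packet takes 1024/48,000 ≈ 21 ms to fill at 48 ksps, but only 1024/2,400,000 ≈ 0.4 ms at 2.4 Msps.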


A few ms as others have said, but it's also a non-deterministic delay.

This is a problem for applications like time-of-flight measurement, so one way to account for it is to send a known signal on TX and look for it on RX.


For what it’s worth, I’m convinced that whoever can make this a $99 box with a great app interface is gonna make a fortune.


[flagged]


Yeah, we received several reports that most gopher: clients can't access gnuradio.org, either.

Obviously, we'll reduce the security for all of our users to suit those who want to use a bleeding-edge signal processing toolkit on a machine running Internet Explorer 6.

</sarcastic remark>


I hate to annoy you because I love GR and use it every day, but...

A potential HTTPS-to-HTTP downgrade attack on www.gnuradio.org does not significantly reduce the security of all of your users. The wiki with logins is on its own subdomain, which could remain HTTPS-only. You distribute the code from GitHub already. As far as I can tell, www.gnuradio.org is just a press-release site these days.


> HTTPS only is pretty restrictive and not relevant in the context.

This seems even more off-topic than complaining about the readability of a site's style. How is HTTPS restrictive and how does context outside of "being a website" make HTTPS excessive?


When older systems try to access an HTTPS-only site they get blocked because there's no cipher overlap or no copy of the cert authority's new root cert (i.e., Let's Encrypt's new one). Everyone realized just how much of a problem this was at the start of the pandemic, when millions couldn't access HTTPS-only government information sources and sites rolled back their security theater.

It's trading accessibility for cargo-cult security.


I've never heard of HTTPS being described as security theater.

What are the issues with TLS 1.3 that make it insecure or ineffectual?


Why 1.3? Most sites today are using TLSv1.2 at best, not 1.3.

One argument that it is/was "security theater"[1] is that there were/are numerous programs that "added support for SSL/TLS" under pressure to conform but did not bother to add authentication measures such as hostname checking and validation, which was not even a function of the OpenSSL library until 2015.

1. Adopting SSL/TLS for accessibility reasons instead of security reasons.


It's security theater in the context of HTTPS-only on websites that aren't performing financial transactions. The idea that someone might figure out a way to do a TLS downgrade attack and force someone over to the HTTP site is not justification for HTTPS-only in those cases.

I love TLS. I think it's the least-worst system we have. But HTTPS-only outside of its proper context does a lot of harm and is pointless. HTTP and HTTPS are two great tastes that go great together.


HTTPS-only is valuable for three reasons. Firstly, it prevents MITM attacks that inject JavaScript (or other) payloads into web pages, which can be used e.g. for DDoS attacks. We saw this when the Great Firewall was used to attack GitHub, for hosting material censored in China, by hijacking an analytics script that was being served over HTTP[1].

Secondly, it makes passive surveillance and data gathering much harder. You can still track which sites a user connects to, but you can't see which pages they access or the contents of those pages. You can still actively intercept those connections via MitM but that requires users to manually install an MitM certificate on their system so they should be aware when it's occurring.

Thirdly, it makes it harder to intentionally block or break HTTPS altogether at the network level to force users to fall back to unencrypted HTTP. (TLS downgrade attacks are different from and much more difficult than, for example, just blocking all traffic on port 443.) If there's no unencrypted fallback, then users will complain loudly when they can't access sites.

If your main concern is old computers the solution is to use a proxy that strips the encryption.

[1] https://www.bankinfosecurity.com/github-hit-by-its-largest-d...


"If your main concern is old computers the solution is to use a proxy that strips the encryption."

Why not also use the proxy to add encryption when it is missing? Then "HTTPS-only" is not necessary. The decision of which scheme to use, http:// or https://, rests with the user, not the server.


Encrypting data after it has transited untrustworthy networks (which could have surveilled or modified it before it gets to you) is about as useful as closing the barn door after the horse has already escaped. The encryption (and authentication) needs to happen at the origin to get any security benefit.


I think you misunderstood. A localhost-bound forward proxy on the client side encrypts the traffic. For example, haproxy can be used for this purpose. It detects the presence/absence of SSL on connection, and if absent, adds encryption before the traffic enters the network. Sorry, I should have been more specific, as "proxy" is a loaded term.


I must still not understand what you mean. Where is the encryption being added? Could you draw me an ASCII network diagram (showing the server, the browser, and the intermediate network hops) with an indication?


"Where is the encryption being added?"

It is being added by the proxy server listening on the loopback which connects to the remote website.

Browser connects to forward proxy on port 80, forward proxy (compiled with SSL library) connects to target IP on port 443.

This is how one can, e.g., use clients that are not SSL-enabled to access websites, etc. that require SSL.

For example, if the forward proxy is listening on 127.0.0.1:80, we can make an encrypted connection to example.com using the original *Hobbit* netcat, which does not support SSL.

   echo -e 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc -vvn 127.0.0.1 80
It is probably more popular to use stunnel for this purpose instead of a forward proxy.
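
For what it's worth, socat can do the same job in one line (a rough sketch; the CA-file path is the Debian/Ubuntu default and may differ on your system):

    # local plain-HTTP listener that opens a verified TLS connection upstream
    socat TCP-LISTEN:8080,bind=127.0.0.1,fork,reuseaddr OPENSSL:example.com:443,cafile=/etc/ssl/certs/ca-certificates.crt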


OK, now I understand what you meant about "a forward proxy on the client side" (as that's exactly what I mean by "use a proxy to strip the encryption"). But I still don't understand why that allows you to not have to use HTTPS-only on the originating server to get the benefits of HTTPS-only?


Because I, the user, am running a forward proxy to encrypt all outgoing HTTP requests, I do not have to rely on "HTTPS-only" on the server side. I enforce "HTTPS-everywhere" on the client side. That's the theory, anyway.

To be honest there are still some sites that do not, and will probably never, offer HTTPS and I have to account for those with the proxy setup. For these websites I might assign them a different local IP that does not add encryption.

In running this setup there are some times where I find that for one reason or another "HTTPS-only" on the server side has failed to catch every instance where http:// should be https://. I use many different clients, the least of which is the modern browser which may have some whizbang features to try to enforce "HTTPS-everywhere". The clients I use more are simpler, less complex and do not have such features. Instead of relying on the modern browser, I rely on an extensive proxy configuration to make sure everything gets encrypted (when appropriate).


http://n-gate.com/software/2017/ gives some pointers. I'll summarize them in my own words as:

1) a CA can issue an illegitimate certificate (gives four links where that's happened), and "cross your fingers and hope transparency and oversight works" for CAA records

2) difficulties of determining a competent certificate authority.

3) size still gives clues about content (see also https://news.ycombinator.com/item?id=14070130 ), so you still have some information leakage, which means you have to worry about your threat model to figure out if something like padding frames is worthwhile

4) speaking of threat model, do you really think your phone, or the phones of your users, don't include certs that lets the telco/government/etc. do #1?

FWIW, found that link from comments after jwz's recent posting about problems updating from letsencrypt using certbot on CentOS. - https://web.archive.org/web/20210118183806/https://www.jwz.o... .

FWIW, my web site has been around for 20 years, serving static files mostly containing a blurb about my services, my blog, and various writings and source code I've developed and released.

I still can't figure out why I should care to switch to https. I can't figure out any relevant threat model to justify the work or worry about possible failure cases.

Eg, "Let's Encrypt warns about a third of Android devices will from next year stumble over sites that use its certs" - https://www.theregister.com/2020/11/06/android_encryption_ce... . (Updated earlier this month: Let's Encrypt "developed a new certificate chain that will prevent incompatibility with these devices to allow more time for them to age out of the market.")

FWIW, my phone is 10 years old.


The reason to do it is because while HTTPS is not perfect (especially when it comes to certificates), the attacks against unencrypted and unauthenticated plaintext HTTP (as I outlined in my other post) are incredibly trivial in comparison. The main point of HTTPS-only is to make the attacks significantly harder and more costly to execute and easier to detect when they do happen. We cannot let the perfect be the enemy of the good. Are you comfortable with ISPs injecting ads into your blog? [1][2]

> FWIW, my phone is 10 years old.

I'm curious, which phone is this and what do you use it for?

The oldest phones I keep in somewhat regular use are a Moto X (8 years old) and a Moto E (6 years old) which I maintain service on using free service providers (which require I use their service at least once each month or my service will be discontinued). When I turn those phones on each month to place a call with them, they take at least an hour to download and install app updates before I can even start using them. On the Moto E I've had to uninstall most apps because the internal storage has been nearly exhausted just by installing the latest updates for the bundled apps. On both phones even just bringing up the keyboard after I select a text field takes multiple seconds and any kind of multitasking is out of the question. I could not imagine trying to use either of those phones on a daily basis.

[1] https://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-...

[2] https://www.reddit.com/r/india/comments/8ry1k4/does_your_isp...


I read https://news.ycombinator.com/item?id=25825188 but didn't understand how the threat models apply.

> Firstly it prevents MITM attacks that inject Javascript (or other) payloads into web pages which can be used e.g. for DDoS attacks

If someone nefarious wanted to MITM my web site, the easiest would be to spoof DNS so it went to some other host, and do the MITM that way. Your ISP could do that to you, no?

While more people could spoof the http connection than spoof the DNS, what's the threat model? If it's the Great Firewall then all users behind it are also using their DNS.

Which is why (as I understand it) you also need things like 'integrity' in your links in order to avoid the attack you describe. Not simply switching to https.

> Are you comfortable with ISPs injecting ads into your blog? [1][2]

They aren't injecting ads into my blog. They are injecting ads into your transfer of files from my site. Why are you using an ISP which does that?

> You can still track which sites a user connects to, but you can't see which pages they access or the contents of those pages

Again, what's the threat model? I'm pretty sure simple traffic analysis would be enough to figure out, based on download size and number of additional requests made, which pages are downloaded.

While there are ways to mitigate that, it's not as simple as moving to https to prevent anyone from figuring out what my users are accessing on my site.

So, do I give my users a false sense of security by switching to https? It depends on the threat model, doesn't it?

> I'm curious, which phone is this and what do you use it for?

Does it make a difference? It's meant to underline the point I made that not everyone has new hardware.

The question for you is, how many people and devices will I block by switching to https-only?

Will poor people using second-hand computers be able to access things? Will old Docker images with scientific projects on them stop working when the embedded certs age out?

I have an implicit promise that if you could access URL X using method Y then you will always be able to access URL X using method Y. Why should I break that promise?

To be sure, I also have a site behind https. Among other things, it serves pip-installable packages, which is why it has a different threat model than my static blog site. (It also doesn't have the 20-year-old implicit promise.)


> If someone nefarious wanted to MITM my web site, the easiest would be to spoof DNS so it went to some other host, and do the MITM that way. Your ISP could do that to you, no?

That's what certificates are for. While certificate authorities can also be compromised, most ISPs don't run their own so they'd have to get a separate organization to cooperate with them to do this.

> They aren't injecting ads into my blog. They are injecting ads into your transfer of files from my site. Why are you using an ISP which does that?

This seems like a very strange semantic argument. From the customer's point of view they are injecting ads into your blog. Many people are not technically-savvy enough to understand that it's the ISP that's putting the ads in and not you. And for many people in the US and I'm sure also around the world they may not have a choice in ISP for a given level of service (would you choose dial-up or satellite over broadband if the only broadband provider that serves your area injects ads on non-secured pages?).

> Again, what's the threat model? I'm pretty sure simple traffic analysis would be enough to figure out, based on download size and number of additional requests made, which pages are downloaded.

The threat model is pervasive passive surveillance. The IETF recognizes this: https://tools.ietf.org/html/rfc7258 HTTPS greatly increases the costs of such surveillance (you can no longer just look at the bytes on the wire, you need to closely examine the site the user is connecting to to correlate the amount of data transferred with specific pages on the site; for dynamic sites or sites using HTTP/2 this can be much harder).

> Does it make a difference?

Not to my argument, but I was genuinely curious what 10-year old phone anyone can consider usable today and the use-cases they have that still accommodate such old hardware. (I could see a basic non-smart phone still being useful, as long as the networks it supports are still active (which in the US will probably not be for more than another year or two)).


"That's what certificates are for"

Could you explain that? I thought that if DNS were spoofed then people could be redirected anywhere else, no matter what the certificate said. I thought that could be mitigated by CAA records, but that too could be spoofed, eg, by your service provider.

How do I prevent the government of China from MITM-ing access to my web site from someone in China, using a Chinese mobile phone with certificates pre-installed and automatically by the local Chinese telco?

"This seems like a very strange semantic argument"

I think you're making the strange argument. If I get a "free" cell phone which has a modified browser that inserted ads when viewing my web site, then from the customer's view those ads are on my site, right?

But of course there's nothing I can do about that. Nor can you.

So why place the responsibility on me?

"if the only broadband provider that serves your area injects ads on non-secured pages"

Have you not seen all of the ads for systems like SecureVPN?

"pervasive passive surveillance"

One of the pervasive passive surveillance attacks mentioned in the ietf link is traffic analysis, which I mentioned earlier. Can you guarantee that if I simply switch to https then the NSA could not use traffic analysis to figure out what users access from my static-pages, public-facing web site?

Do you seriously think the NSA hasn't automated something as simple as remembering download sizes for each page, for a large number of web sites, in order to infer things like this? So do my readers gain any additional security against NSA surveillance by switching to https?

Don't forget that my service provider (in the US) can be forced to reveal details and do tracing without telling me, including providing clear-text access to the logs.

Since I cannot defend against NSA surveillance on my readers, I have to choose a threat model where I can make a difference.

And I can't figure out who I should care to thwart, where https would make a meaningful difference.

Clearly there are web providers which can and should use https to protect privacy against non-nation-state actors. Passwords, money, and the ability to insert code without review are three things which change the balance.

But I don't see any benefit for my basic blog site that's been around for 20 years, and there appears to be a (small?) negative.


> "That's what certificates are for" Could you explain that? I thought that if DNS were spoofed then people could be redirected anywhere else, no matter what the certificate said.

This suggests you are not very familiar with how HTTPS works. I'll give you the short version, but I'd suggest you do a lot more reading into how TLS, public key cryptography, and certificate authorities work.

Every site that wants to use HTTPS (in a way that doesn't trigger massive warnings in the client browser) has to obtain a certificate containing (among other things) the domain names that are authorized to use the certificate, a public cryptographic key, and a cryptographic signature from a certificate authority. The job of the certificate authority is to verify that the person they are issuing the certificate to (by signing it) has control over the domains listed in the certificate. (Let's Encrypt's main innovation was to automate this process using the ACME protocol[1].)

When you connect to a site over HTTPS, the TLS handshake includes the site's certificate, and the handshake messages are signed using the private key corresponding to the public key in the certificate. This means the client can verify that the certificate is valid for the site they are connecting to (by matching the domain they are attempting to connect to against one of the domains in the certificate's list), that the server they are talking to has the private key for that certificate (because the public key is embedded in the certificate and can be used to verify the signature produced with the private key), and that the certificate itself is valid because it is authorized by a certificate authority (by using the public key from the CA's root certificate, in the "root store" of the browser/OS, to verify the signature on the certificate).

Additionally, once the handshake is completed all further traffic is encrypted and authenticated using an ephemeral key negotiated during the handshake. This means that you can trust the data coming over the wire is always coming from the correct server and has not been tampered with. (And as a side benefit it will detect any bit errors that occur during the connection, even those that are not caught by the TCP checksum, so you can be sure that e.g. large file transfers are not corrupt due to network errors.)

In order to carry out a DNS spoofing attack against a site using HTTPS (with a valid certificate issued by a recognized certificate authority), you'd either need to steal the private key associated with the certificate (in which case you've probably compromised the target site deeply enough that you don't need to spoof their DNS) or you'd need to obtain a fraudulent certificate for that site that was nevertheless issued by a recognized certificate authority. Any CA that issued an invalid certificate is supposed to immediately revoke the certificate and if they mess up like that too often they run the risk of being delisted from browser/OS root stores (this has happened in the past). Additionally, Certificate Transparency logs (where a CA maintains a public log of all certificates they have authorized) and DNS CAA records (which restrict which certificate authorities can issue certificates for a given domain name) further make this type of mis-issuance harder to carry out and easier to detect.

> How do I prevent the government of China from MITM-ing access to my web site from someone in China, using a Chinese mobile phone with certificates pre-installed and automatically by the local Chinese telco?

If the device itself is compromised there's nothing HTTPS can do. However, that scenario is outside of the scope of HTTPS. The threat model for HTTPS is bad actors in between a trusted service and a trusted client wanting to surveil or modify the traffic as it transits their networks. It is extremely effective at that.

> If I get a "free" cell phone which has a modified browser that inserted ads when viewing my web site, then from the customer's view those ads are on my site, right?

Again, if the user's device is compromised that's outside of your control and is not your fault. However, if you do not use HTTPS then anyone in between your site and the end-user's device can monitor or meddle with the connection and you do have a choice to use HTTPS to avoid that.

> Have you not seen all of the ads for systems like SecureVPN?

That still means you have to trust the VPN service and their upstream providers not to do these things. With HTTPS you are guaranteed to have protection all the way from your server to the client device.

> Can you guarantee that if I simply switch to https then the NSA could not use traffic analysis to figure out what users access from my static-pages, public-facing web site?

Obviously the NSA (and other state actors) could almost certainly figure out what any given person was looking at even if they were using HTTPS. The point of HTTPS is to increase the cost of doing such a thing from near 0 to significantly greater than 0. You can no longer just slurp up the bytes as they pass through a middle network, you have to actively catalog the site in question and perform statistical analysis to get that information.

What HTTPS will protect you against are ISPs and other middle networks monitoring your web browsing activity to sell that data to advertisers. This is a serious enough concern that the FTC asked the major US ISPs to provide information on what data they do sell about their customers[2]. And that's on top of the ad injection scenario I've already outlined.

[1] https://tools.ietf.org/html/rfc8555

[2] https://arstechnica.com/tech-policy/2019/03/ftc-investigates...


> When you connect to a site over HTTPS

Right, but if DNS is spoofed then you haven't yet connected to the site, you've connected to another site, so none of the security your described is relevant.

How does https prevent DNS spoofing? As far as I know, it doesn't. You need DNS CAA records. Which your ISP can spoof just like it can intercept your unencrypted sessions.

> DNS CAA records (which restrict which certificate authorities can issue certificates for a given domain name) further make this type of mis-issuance harder to carry out and easier to detect.

Earlier you talked about people who couldn't detect that ads were being injected into their browser. When you say "easier to detect", do you mean easier to detect by people who don't recognize that their ISP injects ads?

Is my readership able to recognize that their ISP is spoofing DNS for a MITM attack?

How do I, right now check if my ISP is spoofing my DNS to MITM my access to my https site? 'Cause I have no clue.

> However, that scenario is outside of the scope of HTTPS.

Which was my point - what is the security threat I should care about?

Look, I don't disagree that https has its place, and I'm fine with https on new systems. But again I ask what are the negatives? If I switch my site to https-only, how many people or devices will be negatively impacted?

Can you answer that question?

> The point of HTTPS is to increase the cost of doing such a thing from near 0 to significantly greater than 0.

Which doesn't require https-everywhere, only https on most/significant number of places.

> to sell that data to advertisers

I've mentioned that that isn't a threat model that I or (I believe) my readership cares about. But let's examine it.

I can add a tracking pixel to each of my pages, right? And my readership won't notice it? And I could sell that information to advertisers? And I could put Facebook likes and other tracking methods in my pages?

And you're fine with that at a technical level, since https doesn't prevent it.

While it's touching that you want to ensure that I am part of the profit stream from any monetization of the contents of my site, I don't think my readership cares - I certainly don't - so it's not part of my threat model.

> that the FTC asked

This is the same government that allows cable monopolies and doesn't enact laws to prevent the ISPs from injecting ads into their downloads? Call me a crazy leftist, but I can't help but notice that this also strengthens the power of the big advertising companies like Google and Facebook by reducing the amount of competition, and that the FTC is looking at ISP rather than Google and Facebook abuses because the ISPs are relatively weak, making this a politically cheap action on their part.


> How does https prevent DNS spoofing?

My post explained in detail how HTTPS prevents DNS spoofing. If you do not understand any of the steps in my argument, please point them out. Otherwise, if you are unable to understand how TLS and certificates work that is not my problem and I ask you to please stop spreading misinformation based on your misunderstanding.

> When you say "easier to detect", are you meaning easier to detect by people who don't recognize that their ISP injects ads?

I mean easier to detect by people who know how HTTPS works, i.e. by technical experts. There are people monitoring the Certificate Transparency logs for domains that have certificates being issued by a different authority than they normally use and/or by an authority that does not match the DNS CAA record.

> How do I, right now check if my ISP is spoofing my DNS to MITM my access to my https site? 'Cause I have no clue.

It's extremely simple: does the lock icon in your URL bar have an error symbol? If no, you can be almost[1] 100% certain the site you are connecting to has not been spoofed because it has a valid certificate. If yes, you should assume you have no better (though also no worse) security than unencrypted HTTP. If you want to go further you can view the certificate your browser received and compare it against the certificate presented to other services running on other networks, e.g. the Qualys SSL Labs server test: https://www.ssllabs.com/ssltest/
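
If you want to do that comparison from the command line, something like this (assuming openssl is installed) prints the issuer and fingerprint of the certificate your own network path serves, which you can compare against what SSL Labs or another vantage point reports:

    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -issuer -fingerprint -sha256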

> Which was my point - what is the security threat I should care about?

To me this is a question of whether you respect and value your user's privacy and security. Enabling HTTPS (and HTTPS-only) is so trivial today that there is no reason not to do it. HTTPS is just basic security hygiene on the modern web regardless of the "importance" of the data. ISPs and other middle networks have proven themselves to be untrustworthy over the years and authenticated encryption (such as that provided by HTTPS) is the only workable defense.

> But again I ask what are the negatives? If I switch my site to https-only, how many people or devices will be negatively impacted?

Virtually none. Even on fairly ancient Android phones (Android 4.1+), which are the least likely to receive software updates, you can install Firefox and get a modern TLS stack and root store. Pretty much every desktop and laptop PC made in the last 10 years can run the latest version of Windows or Linux with the latest version of Firefox or Chrome. Sticking with software so obsolete it cannot interoperate with modern TLS is a choice and the people making that choice should be knowledgeable enough to understand the consequences of that choice (including the workarounds they can do to accommodate that obsolete software, e.g. proxy servers).

> Which doesn't require https-everywhere, only https on most/significant number of places.

If you do not use HTTPS on your site then the cost of surveilling your site is 0 regardless of whether other sites use HTTPS.

> And you're fine with that at a technical level, since https doesn't prevent it.

Yes, I am fine with that because it means that I got exactly what you intended me to receive and I can be confident in ascribing that decision to you and judging you by it. My point about ad injection was just to show a real-world scenario where an ISP or other middle network has tampered with the data being delivered. You say you are a leftist; what if someone injected "alt-right" talking points into your blog? What if someone swapped your contact information for a spoofed version so they can intercept e-mails and other private communication intended for you? What if someone replaced code on your site with malware? What if someone injected a script into your site to DDoS Github[2]? These are all things that are possible with unencrypted and unauthenticated connections (i.e. plain HTTP). If you can't generalize from the examples I'm giving you then I ask you to engage your imagination more vigorously.

[1] The "almost" qualification is primarily to account for intentional MITM appliances used by whoever is running your network (which would require them to install their own MITM certificate on your client) or malware running on your client, as well as much more rare scenarios like mis-issued certificates or state actor intervention.

[2] https://www.bankinfosecurity.com/github-hit-by-its-largest-d...


It's taken a while because of work, and trying to research things took time.

I weep for the loss of information. I went to postings of mine from 10+ years ago to see what would happen in an https-only world. Very few of my external links still point to valid resources.

https-only increases link-rot. Of my remaining valid links, many are to non-https sites, because the field I'm in is so specialized and not monetized that things hang on only because they don't need maintenance. Enforce https-only and they are effectively gone. Here's one - http://www.umass.edu/molvis/francoeur/levinthal/Levinthaltex... . No copies appear anywhere else on a DDG search, though it is in archive.org so not quite perma-death.

Here's a 15 year old code base still going strong http://weblogo.berkeley.edu/ (there's also a v3, also with http).

> If you do not understand any of the steps in my argument, please point them out.

You are correct that I know little about https. Here's the scenario I don't understand, which your explanation doesn't cover:

If I connect to https://MrRadar.server/some/url then at this point my browser knows nothing about it. The MITM server uses a self-signed certificate which expires in one second. My browser will accept it, right? And connect? So the MITM server knows what URL I've tried to connect to?

It then slowly responds with a redirect, taking 1 second, to the same URL (possibly using tricks to prevent the browser from a redirect loop). The self-signed certificate has expired, so the browser doesn't use it, and the second attempt isn't MITM'ed.

This goes through, successfully, right? And there's only a short time where I would see any lock icon? So I have to pay very close attention to see it. And if the attacker times it just right, so the https negotiation is better aligned to the one-second transition, that window is even smaller.

How does https prevent this sort of information leakage?

NOT THAT IT MATTERS, because my corporate site, with https, has only a couple of dozen pages and they are all public, so simple traffic analysis lets anyone in the middle know what you are reading from my site. So I "ask you to engage your imagination more vigorously" and wonder if sometimes switching to https gives people a false sense that people in the middle can't know which pages they access.

> then the cost of surveilling your site is 0

I believe you mean minuscule, since carrying out the surveillance and storing the data have a cost. Making use of the data has an additional cost.

> negatively impacted

As it turns out, I was negatively impacted yesterday. I made a typo in an email pointing them to my web site (think 'ycombintor.com' instead of 'ycombinator.com'). The quick response was that the site wasn't working. I tried, and got the "Warning: Potential Security Risk Ahead" because that alternate site's certificate was invalid. I spent 5 minutes trying to figure out if my hosting site had messed things up, before I got the followup email pointing out the typo.

Without https, it would have been clear that I got the wrong web site.

(While https, of course, doesn't protect against typo-squatting.)

> Sticking with software so obsolete it cannot interoperate with modern TLS is a choice

Shrug. Forcing everyone to use https is also a choice.

I also mentioned old software stacks. I work with scientific code, some dating back decades, and with little funding for maintenance.

I'm dreading what's going to happen at the end of this month when pip drops support for Python 2.7 causing link-rot to make some old projects no longer work, causing the associated papers to no longer be so easily reproducible.

Part of my work today was to get Docker containers set up with Python 2.7, etc., as a sort of refuge, so those tools would be at hand 5 years from now.

Your 'basic security hygiene' is my archival headache.

> what about if someone inject "alt-right" talking points into your blog?

You've already derided my objection to your characterization. I don't know why you think I've changed my opinion.

> What if someone swapped your contact information

I've already pointed out that I don't see that as a valid security threat.

I mean, someone could break down your door and attack you in the middle of the night - does that threat cause you to sleep in a safe room?

Do I also have to worry about them bribing the ops people at my hosting provider?

> What if someone injected a script into your site to DDoS Github[2]?

You mentioned that before. Your [2] says: "in order to execute such an attack, the perpetrator has to create a persistent [cross-site-scripting] DDoS scenario on a very popular website, or ... have control over a very high-traffic proxy - in this case ... the Chinese government's network equipment,"

That is obviously outside of my threat model, in the same way I don't have a safe to store my wallet.

If someone could control the Chinese government's network equipment, couldn't they reroute a lot of https traffic to GitHub to DDoS it?


HTTPS-everywhere, along with constant monitoring and reporting of certificate chains by the browser, is designed to protect against QUANTUM attacks [1], which, ~10 years ago, were being scaled to support millions of simultaneously attacked devices.

It is not cargo-cult security.

[1] https://en.wikipedia.org/wiki/Tailored_Access_Operations#QUA...


HTTPS Everywhere (client side option) is great. HTTPS only (server side option) is not.


HTTPS only is a "Fail Closed" system, i.e. it blocks access in case of failure. This is safe for the general population.

HTTPS/HTTP mixed support is a "Fail Open" system, i.e. it allows (unencrypted) access in case of failure. This is unsafe for the general population; see QUANTUM (above).


You can argue for wearing a bulletproof vest at home if you're an Iraqi nuclear scientist. But for most people it doesn't make sense and does more harm than good.

In the same way, HTTPS only, *requiring* a system that "fails open", is bad for the general population. HTTP+HTTPS, yes, definitely. HTTPS only, no, only for sites and contexts where the rigid security is justified.


Yes, context is key. It's OK for Wikipedia to fall back to HTTP, but not for a bank.



