I applaud the effort to hate on "smart" middleware proxies!
That being said, the author gets no points for name-dropping random distributed-systems algorithms, or for using TCP keepalives (2-hour minimum default!) as an argument against TLS-terminating proxies.
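For context on that 2-hour figure: the default idle time before the kernel sends the first keepalive probe is 7200 seconds on Linux, but applications can lower it per socket. A minimal sketch, assuming Linux (the `TCP_KEEP*` constants are Linux-specific, and the 60/10/5 values are arbitrary illustrations, not recommendations):

```python
import socket

# Enable keepalive on a socket and, on Linux, tune the timers down from
# the 2-hour kernel default before the first probe is sent.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Guarded, since these constants are not exposed on every platform:
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
```

The point being: a proxy relying on default keepalives won't notice a dead peer for hours, which is exactly why they make a weak liveness argument.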
Is there a reason to (as he says) "fully implement the protocol" in the proxy? I battled with WebSockets through Pound last week, and it simply doesn't work, because the author took a non-Postel stand on protocol specifics.
Switching to a protocol-agnostic proxy like Hitch (previously stud) fixed that without losing functionality, and I expect it to age better as well.
Sadly, the internet isn't that nice. I've stayed at various hotels where the router would silently drop keep-alive packets (but god forbid it inform the packet layer that it does this!) and mangle DNS responses ("looking for mail.example.com? Here's the answer for example.com", or "looking for doesnotexist.com? Here's the result for internalsearchengine.com, which redirects you to a sponsored search page full of ads").
Even encrypted DNS will suffer, because middleboxes with captive portals will attempt to tamper with it. Unless you pipe it over TLS or HTTP, in which case you run into the problem of not knowing why there is no connection when you're behind a captive portal. (We obviously need to fix captive portals; they're the source of 90% of these problems.)
I learned recently about RFC 7710 which specifies a DHCP (v4/v6) and RA option for "You're on a captive portal, here's the website you should visit before you get access": https://tools.ietf.org/html/rfc7710
Do any of the major implementations of captive portals support it?
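For reference, the RFC 7710 option payload is just the portal URI as a string inside the usual type-length-value options area (DHCPv4 option 160, DHCPv6 option 103, RA option 37 per that RFC; RFC 8910 later moved the DHCPv4 code to 114). A hedged sketch of pulling it out of a raw DHCPv4 options buffer (the buffer contents below are invented):

```python
from typing import Optional

CAPTIVE_PORTAL_OPT = 160  # DHCPv4 option code per RFC 7710

def find_captive_portal_uri(options: bytes) -> Optional[str]:
    """Walk a DHCPv4 type-length-value options area looking for option 160."""
    i = 0
    while i < len(options):
        code = options[i]
        if code == 255:   # end-of-options marker
            break
        if code == 0:     # pad byte, no length field
            i += 1
            continue
        length = options[i + 1]
        value = options[i + 2 : i + 2 + length]
        if code == CAPTIVE_PORTAL_OPT:
            return value.decode("ascii", errors="replace")
        i += 2 + length
    return None

# Toy buffer: a pad byte, option 160 carrying a URI, then the end marker.
buf = bytes([0, 160, 30]) + b"https://portal.example/captive" + bytes([255])
print(find_captive_portal_uri(buf))
```

Whether the client's DHCP library surfaces this to the OS or browser is, of course, the unanswered question in this thread.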
RFCs are never anything other than proposed standards or informational documents; there is no point at which an RFC becomes a "recommended standard" or some such thing. All internet standards, from IP to HTTP, are proposed standards, and that doesn't actually say anything about whether they're generally implemented or not.
Not that I know of. My pfSense firewall doesn't have it (IIRC), so my guess is that the poorly maintained router boxes in a hotel basement definitely don't.
I'm not sure the various DHCP clients even communicate this properly to the OS or browser (I wouldn't know how to query for it on Linux).
"TCP implementations should follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others." -- Jon Postel
Hi jamwt. We haven't forgotten your(?) excellent work on stud!
Both changes.rst and the man page explain where Hitch came from.
Hitch has seen significant changes since we forked stud 0.3.2: proper reload/SIGHUP support, an improved configuration format, and OCSP stapling, for example. Running it at large scale (cert/IP-wise) also works better now.
If you're ever in our parts of the world, let us know and we'll buy you some beers/coffee/$beverage and tell you all about how your old project is doing.
The reason is pretty simple: LibreSSL isn't available/packaged on the distributions we care about, and we don't have the will, money, or knowledge to do it ourselves. (With my VS hat on.)
We are open to merging any code changes necessary to get it running with LibreSSL, though.
How do you think the CPU usage would look with 10 kB buffer sizes? And since we're throwing numbers in the air, why stop at 10 kB? If we reduce to 1 kB, that should give us MUCH MORE connections!!11
Let me ask a leading question: how much of this do you think is openssl overhead?
Please consider optimising for a real usage scenario, not some fantasy benchmarking setup.
I am not picking my numbers randomly. On an x86/x86-64 Linux kernel, one socket (one connection) will use at least one 4 kB physical memory page. So if userland also allocates one or two 4 kB pages for its own needs, you need a minimum of 8 to 12 kB per connection. That's why I quoted ~10 kB. The theoretical minimum is 4 kB per connection: one page in kernel space and nothing in userland (e.g. if you use zero-copy to transfer data between sockets or to/from file descriptors).
From reading the launch posts, it seems the real advantage is that it allows Varnish Software to bring SSL under the same roof and offer commercial support for it, i.e. more a business advantage than a technical one?