I applaud the effort to hate on "smart" middleware proxies!

That being said, the author gets no points for name-dropping random distributed systems algorithms and using TCP keepalives (2 hours minimum!) as an argument against TLS-terminating proxies.
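
For what it's worth, the two hours is just the RFC 1122 / Linux default; an application that cares can shorten it per socket. A minimal sketch, assuming the Linux-specific TCP_KEEP* socket options:

    import socket

    # Shorten TCP keepalive from the ~2h default on a single socket.
    # TCP_KEEPIDLE/KEEPINTVL/KEEPCNT are Linux-specific.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before the kernel gives up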

Is there a reason to (as he says) "fully implement the protocol" in the proxy? I battled with WebSockets through Pound last week, and it simply doesn't work because the author took a non-Postel stance on protocol specifics.

Having a protocol-agnostic proxy like Hitch (previously stud) fixed that without losing functionality, and I expect it to age better as well.


Sadly, the internet isn't that nice. I've been to various hotels where the router would silently drop keepalive packets (but god forbid it inform the packet layer that it does this!) and mangle DNS packets ("looking for mail.example.com? Here is the answer for example.com" and "looking for doesnotexist.com? Here is the result for internalsearchengine.com, which redirects you to a sponsored search page with ads").


And this is why encrypted DNS is a must.


Even encrypted DNS will suffer, because middleboxes with captive portals will attempt to tamper with it.

Unless you pipe it over TLS or HTTP, in which case you run into problems with not knowing why there is no connection in a captive portal. (We obviously need to fix captive portals; they're the source of 90% of problems.)


I learned recently about RFC 7710, which specifies a DHCP (v4/v6) and RA option for "you're on a captive portal, here's the website you should visit before you get access": https://tools.ietf.org/html/rfc7710
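
Serving it looks trivial; a sketch for ISC dhcpd, using the DHCPv4 option code 160 that the RFC assigns (the portal URL is made up):

    # dhcpd.conf sketch
    option captive-portal code 160 = text;
    option captive-portal "https://portal.example.com/";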

Do any of the major implementations of captive portals support it?


RFC 7710 is only a proposed standard.

It has been implemented in a few captive portal solutions[1], but as far as I know it is not understood by any client software.

[1] https://github.com/coova/coova-chilli/pull/274


RFCs are never anything other than proposed standards or informational documents. There is no point at which an RFC becomes a "recommended standard" or some such thing.

All internet standards, from IP to HTTP, are proposed standards; it doesn't actually mean anything about whether they're generally implemented or not.


Actually, they do become standards:

https://www.rfc-editor.org/standards#IS


Oh yeah, that started happening recently; I forgot about that.


Not that I know of. My pfSense firewall doesn't have it (IIRC), so my guess would be that poorly maintained router boxes in a hotel basement definitely don't have it.

I'm not sure if the various DHCP clients even communicate this properly to the OS or browser (I wouldn't know how to query for it on Linux).


If you're using NetworkManager you can get DHCP options by being mildly angry at the D-Bus API:

    $ python3
    >>> import dbus
    >>> bus = dbus.SystemBus()
    >>> nm = bus.get_object("org.freedesktop.NetworkManager", "/org/freedesktop/NetworkManager")
    >>> conn = bus.get_object("org.freedesktop.NetworkManager", nm.Get("org.freedesktop.NetworkManager", "PrimaryConnection", dbus_interface="org.freedesktop.DBus.Properties"))
    >>> dhcp = bus.get_object("org.freedesktop.NetworkManager", conn.Get("org.freedesktop.NetworkManager.Connection.Active", "Dhcp4Config", dbus_interface="org.freedesktop.DBus.Properties"))
    >>> options = dhcp.Get("org.freedesktop.NetworkManager.DHCP4Config", "Options", dbus_interface="org.freedesktop.DBus.Properties")
    >>> str(options["subnet_mask"])
    '255.255.255.240'
I guess you can parse /var/lib/dhcp/dhclient.*.leases otherwise?
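
Something like this naive sketch, assuming the usual dhclient lease file location (it doesn't track which lease block is the active one):

    import glob
    import re

    # Dump every "option <name> <value>;" line from dhclient lease files.
    for path in glob.glob("/var/lib/dhcp/dhclient*.leases"):
        with open(path) as f:
            for line in f:
                m = re.match(r'\s*option\s+(\S+)\s+(.+);\s*$', line)
                if m:
                    print(m.group(1), "=", m.group(2))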


For reference: https://en.wikipedia.org/wiki/Robustness_principle

"TCP implementations should follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others." -- Jon Postel


If more software took an anti-Postel stand, I feel we would have net fewer interoperability problems.


By that you mean if more software was more strict about what it accepted instead of being liberal about it?


Yes, that.


Hi jamwt. We haven't forgotten your(?) excellent work on stud!

Both changes.rst and the man page explain where Hitch came from.

Hitch has seen significant changes since we forked stud 0.3.2, for example proper reload/SIGHUP support, an improved configuration format, and OCSP stapling. Running it at a large scale (cert/IP-wise) also works better now.

If you're ever in our parts of the world, let us know and we'll buy you some beers/coffee/$beverage and tell you all about how your old project is doing.


Pssst, jamwt!

Don't let them buy you beer.

Insist on tasting their homebrew instead.


HTTP/2 is on the way.


On Linux, netem can help you do something similar. It is also faster, better tested, has more features, and so on.

I guess an argument can be made about the user interface, as tc is a nightmare. :-)
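
From memory, something like this adds 100ms (+/- 10ms) of delay and 0.5% loss, so treat it as a sketch rather than a recipe:

    # run as root; eth0 is an assumption
    tc qdisc add dev eth0 root netem delay 100ms 10ms loss 0.5%
    # and to remove it again:
    tc qdisc del dev eth0 root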


The reason is pretty simple: LibreSSL isn't available/packaged on the distributions we care about, and we don't have the will, money, or knowledge to do it ourselves. (With my VS hat on.)

We are positive about merging any code changes necessary to get it running with LibreSSL, though.


How do you think the CPU usage would be with 10kB buffer sizes? And since we're throwing numbers out into the air, why stop at 10kB? If we reduce it to 1kB, that should give us MUCH MORE connections!!11

Let me ask a leading question: how much of this do you think is OpenSSL overhead?

Please consider optimising for a real usage scenario, not some fantasy benchmarking setup.


I am not picking my numbers randomly. On an x86/x86-64 Linux kernel, one socket (one connection) will use at least one 4kB physical memory page. So if userland also allocates one or two 4kB pages for its own needs, you need a minimum of 8 to 12kB per connection. That's why I quoted ~10kB.

The theoretical minimum memory usage is 4kB per connection: one page in kernel space, and nothing in userland (e.g. you use zero-copy to transfer data between sockets or to/from file descriptors).

At Google, our SSL/TLS overhead per connection is 10kB: https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...
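
Back-of-envelope, here's what those per-connection figures mean for capacity (a rough sketch, ignoring everything else the process needs memory for):

    # connections per GiB of RAM at the per-connection costs quoted above
    GIB = 1024 ** 3
    for kb in (4, 10, 12):
        print(f"{kb} kB/conn -> ~{GIB // (kb * 1024):,} connections per GiB")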


Thanks for the data point.


Yes, doing hard crypto for all users has costs. Welcome to the real world. :-)


Advantages are that it is faster, and that it is a small and simple program that does a single thing well.


From reading the launch posts, it seems the real advantage is that it allows Varnish Software to bring SSL under the same roof and offer commercial support for it, i.e. more of a business advantage than a technical one?


Congratulations, you have rediscovered a problem described in a 14-year-old RFC: http://tools.ietf.org/html/rfc3135

