
Yeah, there’s no way this is about raw Caddy or nginx performance; it’s ridiculously poor backend coding.

At a glance, though, the real difference is that Caddy’s reverse proxy defaults to keepalive and nginx’s upstream does not. Enable keepalive in nginx and you’ll see similar performance.
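For reference, a minimal sketch of what enabling upstream keepalive looks like in nginx (the upstream name and address are made up). Note that the proxied connection also has to be switched to HTTP/1.1 and the Connection header cleared, otherwise nginx keeps closing the backend connection after each request:

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 32;   # number of idle keepalive connections cached per worker
    }

    server {
        listen 80;
        location / {
            proxy_http_version 1.1;          # keepalive requires HTTP/1.1 upstream
            proxy_set_header Connection "";  # strip the default "Connection: close"
            proxy_pass http://backend;
        }
    }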



Seems like nginx should update its default settings. I can’t think of a reason not to make keepalive the default.


It introduces an availability risk if the connection reuse times don’t exactly match the keepalive times on the backend, plus an increased risk of being vulnerable to an HTTP request smuggling issue if anyone discovers a new and unpatched one.

That said, I agree that using keepalive seems like a reasonable default - especially since other HTTP clients do the same.


nginx is less opinionated and tends to err on the side of lower resource utilization than Caddy by default. There are myriad situations where keepalives aren’t appropriate, from the client not supporting them to the request throughput not being high enough to justify holding TCP connections open.


Not using keepalive is what will likely require more resources. No keepalive on a backend connection means potentially performing another TLS handshake for each request, which is immensely expensive. The cost of idle TCP connections is small by comparison (maybe 100 kB of RAM per connection, and no CPU cost).

I agree that there can be drawbacks - e.g. Nginx trying to reuse a keepalive connection that the backend has already closed due to badly aligned timeouts. That’s a potential availability issue which can only be avoided by careful configuration (see the sketch below). A client not supporting keepalives shouldn’t be an issue, since Nginx is the client here.
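A minimal sketch of the kind of careful configuration meant here, assuming a backend whose idle keepalive timeout is 75 s (both timeout values are made up): keep nginx’s upstream keepalive_timeout comfortably below the backend’s so nginx never tries to reuse a socket the backend is about to close. The keepalive_timeout directive inside an upstream block requires a reasonably recent nginx.

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 32;
        # Close idle upstream connections well before the backend's own idle
        # timeout (assumed to be 75s here), so nginx doesn't race the backend.
        keepalive_timeout 30s;
    }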



