> With 4 concurrent requests, NGINX was able to run 100 calls/second, with a 90%-ile latency of 48ms. Under the same conditions, Caddy ran 400 calls/second, with a 90%-ile latency of 16ms.
If you are processing only hundreds of queries a second, then both are either seriously misconfigured, or the bottleneck is elsewhere.
Yeah, there's no way this reflects Caddy's or nginx's raw performance; it's ridiculously poor backend coding.
At a glance, though, the difference is that Caddy's reverse proxy defaults to keepalive connections to the upstream and nginx does not. Enable keepalive in nginx and you'll see similar performance.
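For illustration, a minimal sketch of enabling upstream keepalive in nginx (backend address and connection count are placeholders): the `keepalive` directive alone isn't enough, since reuse also requires HTTP/1.1 and a cleared `Connection` header.

```nginx
upstream backend {
    server 127.0.0.1:8080;
    # Keep up to 32 idle connections to the upstream per worker
    keepalive 32;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # Keepalive to the upstream requires HTTP/1.1
        # and clearing the "Connection: close" header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```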
It introduces an availability risk if the connection-reuse times don't match the keepalive timeouts on the backend. Plus an increased risk of being vulnerable to an HTTP request smuggling issue if anyone discovers a new and unpatched one.
That said I agree that using keepalive seems like a reasonable default - especially since other HTTP clients do the same.
nginx is less opinionated and, by default, tends to err on the side of lower resource utilization than Caddy. There are myriad situations where keepalives aren't appropriate, from the client not supporting them to request throughput too low to justify holding TCP connections open.
It will likely require more resources. No keepalive on a backend connection also means you need to potentially perform another TLS handshake for each request - which is immensely expensive. The cost of idle TCP connections compared to that is pretty small (maybe 100kB RAM per connection, and no CPU cost).
I agree that there can be drawbacks - e.g. Nginx trying to reuse a keepalive connection the backend has already closed due to badly aligned timeouts. That's a potential availability issue which can only be avoided by careful configuration. The client not supporting keepalives shouldn't be an issue, since Nginx is the client here.
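One way to mitigate the timeout-alignment problem above (a sketch, assuming nginx 1.15.3+ which added `keepalive_timeout` inside `upstream` blocks, and a hypothetical backend that closes idle connections after 75s): close idle upstream connections on the nginx side first, so nginx never tries to reuse a connection the backend has already dropped.

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 32;
    # Keep this comfortably below the backend's own idle timeout
    # (here assumed to be 75s), so nginx closes idle connections first
    keepalive_timeout 60s;
}
```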
Backend connections are not keepalive with that nginx config, and the benchmark is run on a shared DigitalOcean VM. I suspect the author is inexperienced with proxies, operating systems and benchmarks.
Has anyone else switched to Caddy and seen similar performance improvements? I've been blindly using Nginx for years but have no love for that configuration file..
I switched to caddy for my personal stuff quite a while ago due to the easy config. I just have everything in a single Caddyfile. No idea about performance though, as for the one thing that needs it I use Envoy proxy.
Have always used nginx, but I'm interested in the newer alternatives out there like caddy, traefik or envoy.
Is there an up-to-date comparison somewhere? Any personal experiences? Would be interested to hear if there are any reasons to switch. Performance has not been an issue for me, but there might be other good reasons.
I'm switching from nginx to envoy for one specific feature: holding client connections while my backend is restarting. When we restart the backend, we drain & serve all existing connections, but refuse new ones, and I want the ingress proxy to hold those connections until the backend comes back online (1-4 seconds). For nginx, this feature is available only in their premium version ($1,500/yr per node).
However, once I chose envoy, I found a whole lot of other features we'll use such as better mirroring/logging on traffic, and dynamic reconfigurability. The main/only downside of envoy for me is that their config files have a far more tedious structure, and I'm basically programming in yaml again.
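The connection-holding behavior described above can be approximated with an Envoy route-level retry policy - a sketch only, with the cluster name `backend` and the timing values as assumptions; the idea is that while the backend is restarting (1-4 seconds), connection failures are retried with backoff instead of being surfaced to the client:

```yaml
route:
  cluster: backend
  retry_policy:
    # Retry requests that fail to connect while the backend restarts
    retry_on: connect-failure,refused-stream
    num_retries: 5
    per_try_timeout: 2s
    retry_back_off:
      base_interval: 0.25s
      max_interval: 1s
```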
Currently the only weakness of envoy is the configuration, which is very much designed not for humans but for an automatic control plane.
A tool that could take something like a Caddyfile with good defaults and spit out an envoy config file would be magical and super useful for those of us who don’t run a large enough setup to have an automatic control plane.
Speed comparison... it's been a while since I've checked, but I remember hearing Caddy will perform at about 70% of the capacity of a fully tuned Nginx, at least as a reverse proxy. Don't remember where I read that. So Nginx will be a bit faster.
However, from what I hear, you will probably never run into a situation where you use all of that, at least in typical situations, because you'll probably run into RAM limits or CPU limits related to the SSL cryptography first. So both will probably be 'fast enough'.
Caddy will be easier to configure than Nginx. That's just because it has a config file built to be nice and easy to read.
There are probably more external tools built to work with Nginx, but you might not need those.
Don't know much about envoy or traefik comparisons, but from what I've heard, traefik is built for a little bit of a different purpose, mainly for containers. You'll have to research more into it.
Traefik solves some problems. Envoy solves other problems. Caddy, I can't see why I'd use it, but I'm not familiar with what capabilities it comes with. There are other proxies as well, especially in the Go community. We use nginx. If the features you need exist in nginx, I would say there is no reason to switch it out. From a holistic point of view there are only very small performance gains in this area.
I would say Caddy is aimed at small to medium companies. You have a web app/site, no dedicated systems admin and you need a reverse proxy that does the right thing out of the box without you having to learn what the right thing is or how to configure it. Basically going from zero knowledge to a working setup can be done in under 30min and it will be safe.
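As an illustration of that out-of-the-box experience (a sketch, assuming Caddy v2, a domain you control, and a backend on localhost:8080), a complete production config can be this short - TLS certificates are obtained and renewed automatically:

```caddyfile
# Complete Caddyfile: HTTPS is automatic for example.com
example.com {
    reverse_proxy localhost:8080
}
```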
Ok, fair, we use cloud products in those cases. Nginx can be complicated, depending on which modules are used - OpenResty and so on. But there are plenty of people out there who know how to admin it.
Caddy is more opinionated and has more features than nginx, but consumes more resources and is generally slower. It's probably plenty for low/moderate-traffic sites, but not always appropriate for high traffic. Caddy is a lot friendlier to use than nginx, and its config and defaults are IMO much more appropriate for modern websites. Caddy has all sorts of quality-of-life features that nginx either paywalls or does not have.
Envoy is a lot more programmable and configurable than nginx. If you find yourself templating nginx configs and refreshing configuration while live frequently, Envoy is a better solution. Envoy requires more upfront work, uses either crufty YAML config or a custom service catalog backend, but is better overall in high-configurability situations.
Can't comment on Traefik.
We work at high scales at $WORK so we've standardized around nginx, but if I were at a smaller company I'd definitely look seriously into Caddy. Then again, once configured nginx doesn't really need much reconfiguration so it's more of a one-time-cost.
There's a bunch of paywalled features of nginx, like healthcheck-based load-balancing, that Caddy has out of the box. Caddy also (IIRC) has experimental support for QUIC and has a directory view. There are others too, but I'd have to trawl the nginx docs for them and I'm being lazy. The healthcheck-based load-balancing thing is a big deal for us at $WORK because we ended up having to write our own code to handle this instead of using nginx, to avoid paying for it.
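For reference, a hedged sketch of what that looks like in Caddy v2 (backend hostnames and the `/healthz` endpoint are placeholders): active health checks and load-balancing policies are built-in subdirectives of `reverse_proxy`.

```caddyfile
example.com {
    reverse_proxy backend1:8080 backend2:8080 {
        # Active health checks, included for free
        # (roughly the feature nginx reserves for Plus)
        health_uri /healthz
        health_interval 5s
        lb_policy round_robin
    }
}
```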
Nginx puts some of the more nifty features behind a paywall, whereas it comes natively in the others.
One example of these is supporting SRV records, which act as glue for containerized workloads.
At Billetto we use Nomad and Consul in our cluster, and internally we expose services with DNS and are using SRV-records, so doing a "$ dig SRV +short _billetto-production-rails.service.consul" will return the local IPs and ports where those containers are running.