Hacker News
Caddy is the first and only web server to use HTTPS automatically and by default (caddyserver.com)
161 points by aabbcc1241 on Sept 12, 2023 | hide | past | favorite | 182 comments


I believe Caddy brought a much needed paradigm shift in web server space, it is an incredible piece of technology.

I have moved all my servers from NGINX to Caddy over the past few years and I couldn't be happier.

Also, I would like to give a shoutout to the team behind Caddy. They have been nothing but great about constantly shipping updates and being incredibly helpful in their community forum.


Caddy is amazing, but on production machines remember to disable the JSON-based admin API, which is unauthenticated, enabled by default, and bound to localhost:2019; it can be a serious security risk in certain deployments.

Put the following in your Caddyfile at the lowest scope to disable it:

  {
      admin off
  }


PSA: systemctl reload caddy will call this api. If you disable it, then reloading the server will no longer work.
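For context, here is a sketch of how the stock systemd unit wires this up (paths as in Caddy's published service file, but verify against your own install):

```ini
# /lib/systemd/system/caddy.service (excerpt)
[Service]
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
# "caddy reload" pushes the new config through the admin API;
# with "admin off" this ExecReload will fail.
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force
```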


Too good for SIGHUP? I deplore an API when signals should work perfectly fine


Signals don't work on Windows. We want a unified API for all platforms. Also signals don't carry arguments, which is necessary to push a new config. At runtime, Caddy doesn't know where the config came from because config is just data.


Apologies in advance, I'm not trying to be mean spirited or too critical. I'm limited in my ability to express at the moment, on mobile.

The API can still be there, I'm just asking for better integration where feasible: signal handling on Linux and similar platforms.

It's silly to tell my init process to go out 'to the network' to do something it can do directly against the child.

I would not expect turning off an admin API to effectively limit my way to administer the process.

Services will generally ordain a path for a config, overridable with arguments. The same file used then is what is re-read on reload.

Argument/command line changing during a reload isn't a thing, that's restarting. We give it config files as an argument (or implicit default) so that can be reloaded.

It's uncommon to start a process with one file, decide you want a new file path, but keep the PID.


See https://news.ycombinator.com/item?id=37482096, the plan is to switch to unix socket by default on certain distributions of Caddy.

But it won't be possible to add signals support. We've thought hard about it but it's simply not a fit. There's discussion in GitHub: https://github.com/caddyserver/caddy/issues/3967


Sockets will definitely be appreciated! It leaves some corner cases I can imagine, but it's definitely a step in the right direction.

ie: admin interface disabled, I can't reload to bring it back... because that depends on it.

With sockets we gain a permission model; one simply being in the 'localhost' scope can't do funny/scary things - either a user or another service on the system.

Thank you for the discussion, I'll give it a read - have a meeting then I can finally use my computer to 'catch up'


Earlier versions of Caddy were just a single binary that accepted signals to reload; the newer versions add a bunch of process management stuff that just got in the way of our existing tooling (...why remove the signals? ugh!) so we just switched back to Nginx.


The only signal we don't support anymore is USR1. (It's not a powerful-enough API for config reloads.) That was why you switched your entire web server stack?


This default behaviour makes sense. If you're going to be using it for hosting in production, then read the documentation, and it's trivial to disable. If I recall, AWS has a similar default for some services. That is, access to the subnet (VPC) gives you full access to the attached service, no password required.


Disagree, to a degree. It's fine to offer this for extended use cases (ie: restarting from a second, trusted, host)

It would be more appropriate to handle signals, particularly SIGHUP. That's how most services have been handling reloads.

It's fine to offer an admin API, especially if I want a peer to be able to affect the local instance, but this shouldn't be the position init is placed in.

Put simply, the init process is what we depend on if everything else fails.


And mongo, and many other packages with insane defaults.

What if rm would by default just delete everything, as it assumes that makes sense? Stupid comparison, I know, also a stupid default.


Do you recall which AWS services are like this? Thinking I better check a few things!


If you have any old S3 buckets, listing the bucket index used to be enabled by default (quite a long time ago, in relative terms).


Thanks!


At least, have it respond only to authenticated requests. Caddy supports client certificate authentication:

https://caddyserver.com/docs/json/admin/remote/access_contro...
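Roughly, that looks like this in JSON config. This is only a sketch: the key and permission paths below are placeholders, and a remote admin endpoint also needs Caddy's identity management configured; see the linked docs for the exact schema.

```json
{
    "admin": {
        "remote": {
            "listen": ":2021",
            "access_control": [
                {
                    "public_keys": ["base64-DER-encoded-client-public-key"],
                    "permissions": [
                        { "paths": ["/config/"], "methods": ["GET", "POST"] }
                    ]
                }
            ]
        }
    }
}
```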


Caddy also supports Unix sockets, which should be rather more difficult to smuggle requests to, and can be protected by file permissions:

    admin unix//var/run/caddy/admin.sock
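For reference, the JSON-config equivalent would be something like this (the socket path is just an example):

```json
{
    "admin": {
        "listen": "unix//var/run/caddy/admin.sock"
    }
}
```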


If they really must leave the functionality enabled by default, this is honestly what the default should be. I still can't fathom why that isn't the case!


Caddy maintainer here: we're looking to move to unix socket by default for Linux distributions. See https://github.com/caddyserver/caddy/issues/5317, the plan is to set this env var in the default service config but I'm trying to be careful about backwards compatibility so I haven't pushed the change for our deb package yet. Will likely do it soon.


I'll see about getting it made the default for the FreeBSD port at least.


I would imagine so the default behaviour could be identical across platforms.


I imagine it's for Windows users. But yes, it could very sensibly be the default in Unix.


While you are right, remember there are usually additional layers of security (and if not, there should be). On the network level, you would only allow ports 80/443 to reach the machine. And if you use a containerized deployment, you would only expose 80/443 as well.


If your application can be used to make outbound requests to the internet (and so many apps can be), you can easily make a GET against localhost. There are ways to lock that down, but they aren’t automatic.


Just another weird and stupid default waiting to be exploited.


Works on localhost. It is not a big deal.


This is how trivial bugs turn into full-fledged threats. Increasing attack surfaces without any justification is bad cyber security.


If you're hosting your applications on localhost it can be a security risk.

A blind SSRF vulnerability (with payload control) in your application could be used to gain full control over the reverse proxy resulting in the attacker gaining full unfettered access to your network.

If you're not using it (and you shouldn't be using such functionality on a production machine), then you don't need it and should disable it, see: https://owasp.org/Top10/A05_2021-Security_Misconfiguration/


It is absolutely a big deal. Any server software should be secure by default, period.


If your server reaches out to user-provided URLs, it can be a big deal. Especially with DNS rebinding, remote users can bind domains to 127.0.0.1, which bypasses CORS-like protections.


We mitigate both DNS rebinding and cross-origin in the admin endpoint by verifying Host and Origin headers -- by default.


Alternatively don't serve your site over HTTP at all. Just redirect to HTTPS.

Edit: I just checked the Caddyfile for one of my sites. There is no config for redirecting HTTP to HTTPS; it does it automatically. So this is entirely unnecessary.


No, what I'm talking about here is the unauthenticated JSON-based configuration API that hosts itself on port 2019 on localhost of the machine that runs Caddy.

This is unrelated to sites hosted using HTTP. I was clumsily using the term "HTTP" to refer to the fact that this configuration mechanism is based on HTTP-communication.


So if I understand this correctly, anyone can bring down a site with a Caddy server by just running: curl -X POST "https://example.com:2019/stop" ? [0]

Seems counter to their objective of having secure defaults.

[0] https://caddyserver.com/docs/api#post-stop


No, by default it listens on localhost. So only processes running on the same machine can connect to that port.


Okay that makes sense. So why would you bother disabling this thing?

I'd imagine if someone already has local access to the server, it's already too late.


Not really. If someone logs in as user A on the machine, and Caddy runs as user B, then unless A has sudo access, A cannot modify Caddy. But with this admin HTTP endpoint, user A can now arbitrarily modify Caddy.


This does kind of raise the question: who is sharing their load balancer / reverse proxy?


That's true, but I think if your production web server is running on a system that you expect to have other users log into and do things on while having the Unix permissions prevent them from interfering with the production server, then your whole architecture and process is deeply broken far beyond the ability of any Caddy design decisions to address.


That's another really good point, even if it's less common these days to see this type of shared machine.


Most people would expect that `sudo` and `curl localhost:2019` imply very different permission levels. But a single POST with payload `-d '{"admin":{"remote":{"listen":"0.0.0.0:2019"}}}'` would expose the admin API to the network, and you'd only have to convince an existing process to make the request.


In some cases, localhost is not just accessible from localhost :) https://unit42.paloaltonetworks.com/cve-2020-8558/

Also SSRF risks as mentioned elsewhere ...


SSRF in an application is a serious issue to have on its own, that's true, but in combination with a Caddy admin endpoint it can be used to give an attacker full access to your local network.

You could have a blind SSRF vulnerability in an application and while that's not great, it is difficult for an attacker to exploit successfully.

If the attacker knows or guesses that you're hosting Caddy on the same machine, they know you most likely have an admin interface on localhost:2019. They can use it to make further local network requests, and it also makes it possible for them to read the results of the requests they were making through the blind SSRF vulnerability hypothesised above.

Basically, if you're not using it (and you shouldn't be using such functionality on a production machine), then you don't need it and should disable it, see: https://owasp.org/Top10/A05_2021-Security_Misconfiguration/


> Basically, if you're not using it (and you shouldn't be using such functionality on a production machine), then you don't need it and should disable it

Actually, most everyone wants zero-downtime config reloads. The API is necessary to perform config reloads.

As others have said, you may use a unix socket instead for the admin endpoint. And see https://news.ycombinator.com/item?id=37482096, we plan to make that the default in certain distributions.


> The API is necessary to perform config reloads.

Of course it isn't. It could reload the config from the same path it loaded the config from in the first place. Like practically all other software has done for decades.


The source of a config doesn't necessarily need to be from a config file. Config loading is abstracted. So it requires input, and signals provide no way to pass arguments, so it's not workable. See https://github.com/caddyserver/caddy/issues/3967


This sounds like a design decision you've made, not an inherent limitation. You can read config from files, like practically all other software has done for decades.


I get the thing about config reloads, I don't think it's worth it due to the security risks of the current default, but I get it.

Happy to hear you're moving to sockets by default on *nix!

However, I'd like to point out that the default should be in the binary, not in the distro's default environment variables. Otherwise it won't reach people who build their own binary; and depending on how you start your Caddy server, you may clear environment variables for that process and end up with the insecure HTTP-based admin endpoint enabled by accident.


The only default that works on all platforms is a TCP socket. We can't write to unix socket file by default because the path to the socket needs to be writable, and there's no single default that has any guarantee to work. So it needs to be dictated by config in some way or another. It's better for it to actually work by default than possibly not work because of a bad default.


So detect the OS and choose the more secure default where possible? I know it's less elegant, but having a much more secure model is worth some sacrifices.


It's not only the OS, it's the environment. File permissions are not a guarantee, no matter the OS.


Then you throw an error in the log, you have to leave something for the admin to do to set their system up correctly. It's better that Caddy fails to enable the admin endpoint than that it enables it in an insecure manner.


You're overestimating the users; a large % of them would not understand how to resolve that on their own and would complain to us that they can't start Caddy without errors. And I fundamentally disagree that the TCP socket is so insecure that it must never be used as a default, it's only insecure if your server is otherwise compromised. It's a sufficient default for 99.99% of users.


Said large percentage of users will be installing through a package manager anyway, where you can make sure that Caddy has a path the user it runs as can write to.

If you're correct that I'm overestimating users then what are you guys doing? You're expecting users to know how to secure their Caddy configuration when in reality most users probably have no idea that this API even exists, they'll put their config in Caddyfile, start the server, and be done with it.

We should be expecting that they don't know anything about the risks involved with leaving an unauthenticated HTTP API on localhost, and instead shipping a default that doesn't place their system and network at unnecessary risk.


> Said large percentage of users will be installing through a package manager anyway

Exactly, which is why the environment variable approach is perfectly fine. The env var will be set in the systemd config.

> You're expecting users to know how to secure their systems

Again, our view is that the TCP socket for admin is secure enough for 99.99% of users, and has been for over 3 years since Caddy v2 was released. We've still not seen any evidence of a practical exploit in the wild.


You should disable it if you don't need it or at least move it behind authentication if you do need it.

Security follows the Swiss cheese model: each individual measure has known limitations but by layering them, you reduce the overall number of attack vectors.

Getting the server to make arbitrary HTTP requests is bad, yes, but limiting what the attacker can do with that makes it less dangerous if you somehow screw that one thing up.


No, it is only bound to localhost.


Their filebrowser (originally part of Caddy, since split out)[1] is a pretty nice tool for serving a web-based browser of your NAS files.

I'm also a huge fan of Caddy's "handle" and "handle_path" directives for their simplicity.
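For anyone unfamiliar: handle_path matches a path, strips the matched prefix, and runs its own handler chain, e.g. (hostname and ports hypothetical):

```caddyfile
example.com {
    # /api/foo is proxied upstream as /foo
    handle_path /api/* {
        reverse_proxy localhost:8080
    }
    # everything else falls through to static files
    handle {
        root * /var/www/html
        file_server
    }
}
```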

One thing I will say against it though is that it does seem to run a little hotter than Nginx on my Pi4. Just random spikes here and there, whereas Nginx barely used to blip.

1: https://filebrowser.org/installation/


I've been using this for years but never knew it was a spin-off from Caddy. I really like it!


Technically it was shipped as a plugin for Caddy back then, it wasn't actually part of Caddy proper. Now they ship it standalone and recommend to reverse proxy to it. IMO it would be nice to still have the ability to include it as a Caddy plugin, but alas.


Yeah! Henrique Dias did great work with it. He should be very proud of his project.


Just a personal experience. About 6 months ago, we moved from NGINX to Caddy on our web app, which handled about 300 million HTTP requests per month at that time (2 web servers, so about 150 million each).

CPU Usage:

  with NGINX: 15-20%
  with Caddy: 70-80%

I tried multiple tweaks but nothing helped to get NGINX-level performance. So, after a few weeks, we migrated back to NGINX.

That being said, I still absolutely love Caddy and use it in a few small scale apps.

- The DX it provides is amazing.
- Creating a PHP-FPM reverse proxy is just a couple of lines.
- Generating SSL certificates on the server is a breeze. With NGINX, you have to mess with other software like certbot.
- It just works :)
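To illustrate the PHP-FPM point, a minimal Caddyfile along these lines is all it takes (hostname, document root, and FPM address are placeholders):

```caddyfile
example.com {
    root * /var/www/html
    # proxies PHP requests to FPM and serves everything else as static files
    php_fastcgi 127.0.0.1:9000
    file_server
}
```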


Thanks for your feedback!

I'd love to capture a profile next time you have a chance. We've been primarily focused on features until just about 6 months ago, so we have started making significant optimizations only recently.


Love to hear it!


Well yeah, NGINX is a highly optimized C application, whereas Caddy is written in Go, so it would be unfair to expect NGINX-level performance. Caddy is more modern and has more helpful features (that are easier to implement thanks to Go), but performance-wise... OTOH, if you use NGINX to serve a PHP or Node app, Caddy serving a Go app should be competitive ;)


It’s definitely fair to expect performance to be within one order of magnitude. 4 is really unreasonable.


4x worse performance is within one order of magnitude. An order of magnitude would be 10x worse.


That’s only if you’re using base 10. Base 2 is a perfectly acceptable order of magnitude.


Then it would still be only 2 orders of magnitude, not 4


Do not feed the troll.


Have you used caddy with HTTP/3? The quic-go version shipped with v2.6.0 wasn't tuned for optimal performance.


Does nginx's configuration complexity mean anything in the age of ChatGPT?

I set up a whole reverse proxy stack on my personal webserver (thanks to the swag docker image: nginx + auto-renewing let's encrypt + fail2ban). I hadn't really done any web stuff before, certainly nothing on the public Internet.

I didn't have time to read the nginx doc so I had ChatGPT do most of it. I'd then ask for changes depending on the challenges of a specific webapp (for example, not requiring basic_auth on certain routes that were using the tool's own auth).

I realize this is a simple example, but if I was able to achieve all my goals quickly with zero prior experience, surely it can't be that hard for someone who does this stuff for a living.


ChatGPT recently recommended rm -fr as the appropriate flags to delete files with confirmation.

Good luck with such setups. Best case they fail, worst case they become security nightmares.


I did benchmark it against nginx and found Caddy to be 5-7x slower, but as all benchmarks go... results are subject to one's requirements (or mistakes).

What got me away from using it:

- the directives feel intuitive but as soon as I needed a complex config it all became a chain of very implicit strings

- the Caddy author(s) decided a few years ago to add a custom HTTP header with their sponsors[0]. That header could not be removed; it's no longer present in current Caddy, but the bad taste still remains.

[0] https://news.ycombinator.com/item?id=15238315


My problem is that Caddy does not ship an X-Clacks-Overhead header by default.


I tried to submit the Caddy configuration for this to www.gnuterrypratchett.com, but looking at it, it doesn't seem like it was ever added to the site.

The configuration is simply:

    header X-Clacks-Overhead "GNU Terry Pratchett"


I don't think they are really competing with nginx. They are just in the sweetspot between convenience vs performance.


I just keep using HAProxy for doing the plumbing and keeping app side as simple as possible (which is often just "a web server builtin into app" + maybe static serving nginx if app is in slow language that can't handle serving statics quickly)

But automatic https does look convenient, no need to have separate certbot running


HAProxy 2.8 has improved[0] Let's Encrypt integration by using acme.sh (so you can get rid of certbot). It still needs a cron job/systemd timer to do renewal of certs, but acme.sh is just a bash script, so you don't need (extra?) system deps for installing it, while certbot requires Python and 12 Python libs on my system (Fedora). And nowadays the "recommended" way to get certbot is via Snap (package manager)...

[0] https://www.haproxy.com/blog/haproxy-and-let-s-encrypt


Haproxy is great in the end but it is pretty awful to work with. Once you've toiled in the mines of acl commands, going back to nginx/caddy is a breath of fresh air. Unless recent versions have completely changed the game, using haproxy when you don't have to is a massive time waster.


We use Caddy in production. The driving force for change was built in automatic https. It just simply works.


I'm happy to hear this -- we work hard on our auto-HTTPS features!


You can't expect a program in Go to compete with Rust or C++.

When they say it's "fast" they mean relative to something like Python.


I don't think language comparisons make sense here, as a ton of performance is application dependent. So when it says it's "fast", it should mean "fast" compared to many, but not necessarily all, similar software, regardless of language (which it probably is; some of the alternatives are not fast even though they are written in C, AFAIK).

Though in this case I would say it being "fast enough" for many use-cases is the relevant part.


I was under the impression GCed languages aren’t necessarily slower than non-GC. Rust still cleans up after itself (RAII and destructors). The difference is in latency, predictability and perhaps total memory use; but not actual speed, I thought.


The part you're missing is secondary effects of designing a language around GC. Most of the time, that means that everything is heap allocated. So even if a GC and RAII-style management did the same amount of work (and they generally don't, GC's often do less when allocating, for example) while doing allocation/deallocation, non-GC'd languages tend to allocate less in the first place. Additionally, if we're talking about overall performance, indirection can be quite bad on cache locality, so it's not even purely about the speed of allocation/deallocating, but about pointer chasing.

Some GC'd languages also offer tools to manage these problems, of course, all I mean to say is that there are a lot of factors at play here.


> it's no longer present in current Caddy but the bad taste still remains

Adding a sponsor header is harmless (albeit useless IMO). For me that would be no reason not to choose this software, and certainly no grounds for a 'bad taste'.


It is bad taste, unprofessional.

Taints the pool with the vibe of "We can do this; we will do this and you schmucks can't do anything about it because it's in the EULA".

Regardless of how harmless it is, it's still an unprofessional quality to implement such a thing and then deny the ability to disable it.

It's the same as if, when you honked your car's horn, it tooted the car model. You'd be annoyed, right?

Sure, it's harmless because how often do you use your car horn, but you expect a car horn to honk, not advertise the model you're driving.


I'm imagining a Tesla's horn chiming the Intel "dah dah ding ding" thing now.

Let's hope Elon doesn't read this.


What purpose does it actually fulfil? Who is actually looking at individual HTTP requests like this? All it does is take up extra traffic...

It's also a security risk if your web server is the only one doing it, as it is a way for an attacker to fingerprint the web server software in use.

I understand and agree with melx's view here completely, even if I do feel Caddy's strengths outweigh its weaknesses.


> Who is looking

devs presumably


Which is exactly the audience we were targeting.

I thought it was a good idea at the time. ¯\_(ツ)_/¯


I think it’s a cool idea, though if it’s on by default, disabling it should be obvious like one of the first lines in the config file.


> Adding a sponsor header is harmless

Harmless or not - I think it's worth looking past this point. Maybe the http header was a way for them to search the internet and find *commercial* sites that didn't pay for Caddy license? Not very pro behaviour.


Caddy is licensed under the Apache license, which allows for commercial use. No one is infringing by not paying for the commercial version.


Well, that was in 2017 (read the thread I posted above), and in that year Caddy was distributing[0] the binary as a licensed product. The Apache licence applied to its source code only (e.g. when you built the server from source, which only a few people did).

[0] https://web.archive.org/web/20180216153020/https://caddyserv...


Thanks for the background. I didn't know it used to be like that. It's a funky licensing scheme if I ever saw one.


I love caddy, I used to litter my docker-compose.yaml files with Traefik labels like:

    labels:
      - traefik.enable=true
      - traefik.http.routers.foundryvtt-http.entrypoints=web
      - traefik.http.routers.foundryvtt-http.rule=Host(`vtt.xxx.nl`)
      - traefik.http.routers.foundryvtt-http.middlewares=foundryvtt-https
      - traefik.http.middlewares.foundryvtt-https.redirectscheme.scheme=https
      - traefik.http.routers.foundryvtt.middlewares=foundryvtt-auth
      - traefik.http.middlewares.foundryvtt-auth.basicauth.users=${foundryvtt-BASIC_AUTH}
      - traefik.http.routers.foundryvtt.entrypoints=websecure
      - traefik.http.routers.foundryvtt.rule=Host(`vtt.xxx.nl`)
      - traefik.http.routers.foundryvtt.tls=true
      - traefik.http.routers.foundryvtt.tls.certresolver=mytlschallenge
      - traefik.http.services.foundryvtt.loadbalancer.server.port=30000
Now I just add the containers (by name), no labels, and map Caddy to their port, like so (in the Caddyfile):

    data.xxx.com {
         reverse_proxy projectsend:80
    }
or, this snippet refers to a WordPress container with BasicAuth in front of it:

    restricted.xxxx.com {
        root * /var/www/html/restricted.xxxx.com/wordpress
        php_fastcgi wordpress-xxxx-restricted:9000 {
            root /var/www/html
        }
        basicauth /* {
            xxx $xx$x05xxxxxxxxx.xx
        }
        file_server
    }
Here's just an index.html (from Hugo in this case) in some dir:

    blog.xxx.nl {
        # Set this path to your site's directory.
        root * /var/www/html/blog.xxx.nl
        # Enable the static file server.
        file_server
    }
I love the simplicity.


That says more about the shitty trend of using labels as config than anything else.

12 factor app did untold damage to the industry by convincing smart-but-inexperienced developers that key-value-only systems are somehow a good way to configure anything more complex than "this app needs a server, password, and user".


Well, apart from the labels, I'm also happy I got rid of that middleware stuff that I still don't fully understand.

I mean, I want https, and I want it in front of my standard docker container that listens on some random port. Caddy requires me to enter only exactly what I need, no more (container name and port and required function (rev-proxy), 2 lines, boom).


Reminds me of essentially moving Java code to XML with Spring. Typical platform-in-a-platform syndrome.


Wait? Caddy can do all that?

I really need to grab a few beers and check out the documentation, my current docker setup is a cargo-cult traefik label monster I just copied from a friend =)

It works, but I really don't know why or how


Heck, you could extend it with Caddy Docker Proxy and go right back to the labels-as-configuration method.

https://github.com/lucaslorentz/caddy-docker-proxy

I actually do this, because I kinda like having the proxy config right next to the app config in my Compose file, but I also dislike how much manual configuration Traefik needs. Downside is you need to know how to write Caddyfile (easy enough) and then also know how to write labels so CDP translates them into the correct Caddyfile (also easy enough, but could be annoying if you're learning both at the same time). Upshot is that once you know how it translates and you know what you need to write, it works just like Traefik but with just two labels, and I think that's pretty neat.

Caddy can support a surprising amount of weird and wonderful configurations, too.


I do this at home. The labels are generally much simpler than traefik.

    labels:
      caddy: subdomain.${DOMAIN_NAME}
      caddy.reverse_proxy: "{{upstreams 8000}}"


Caddy has been a joy for me personally, coming from NGINX. I especially love the ease of adding a new site and how little config it takes. Small self-plug: I recently wrote an article about some cool config examples https://jarv.org/posts/cool-caddy-config-tricks/


For local development, I don't think nginx/apache even contend against it. The config is incredibly readable, and caddy is a single executable with no dependencies.


Also for production, it's such a joy to be able to configure a VM with a single Caddyfile (or worst case, provision the server using the Caddy REST API). I love it!


Same here.

And, having mostly worked with Rails, PHP and Python http services, the proxying webserver is hardly ever a performance issue. You can stick the slowest webserver in front of a Rails (Rack) app and still be unable to measure the latency it adds.

I've been using Caddy for years, and while it's slower in benchmarks than others, I've never had a practical situation where that mattered.


If you mean running caddy on the same VM / server as the upstream, I suspect you're only incurring nanoseconds of extra latency.


Well, if you never build anything that does gigabits of traffic, then yeah, it doesn't matter how slow it is.


Comparing it to nginx is a such a low bar; 9 out of 10 masochists prefer nginx over any other tool.

But I see and agree with your overall point. Caddy is like a breath of fresh air, and just as useful!


It’s funny to read this, remembering when Nginx was considered the fresh air in comparison to Apache.


It’s funny because I remember when Apache was a breath of fresh air to httpd.


Backing the days (5+ years ago I think) we tried it, and while it was nice, its licensing/pricing made us not use it for our startup, as it seemed to pose a sustainability threat when growing (or not growing at the right pace).

Has it improved since then?

disclaimer: I don't remember the details, I was just told to use nginx because caddy is problematic, so I built the system with nginx open source.


This got me curious (and a bit worried) so I checked, and it seems like it might have been differently licensed for commercial use around the time you mention, but that no longer appears to be the case: https://caddy.community/t/caddy-license-for-commercial-use/1...


We had the same issue, and between that and the author's attitude towards community feedback at the time, I simply won't look back; I just got more and more familiar with NGINX.


> Backing the days

eggcorn or autocorrect?


Neither. Lack of coffee. Won't correct it now :D

(English is not my mother tongue; mine, Hungarian, is a phonetic language, and while I know the difference very well, being tired causes these kinds of typos sometimes)


I (native language German) sometimes make similar homophone typing errors. My brain knows what I want to type but is already further ahead; by the time (100-200ms later) I get to typing it, my fingers only remember the sound of it and sometimes write something different that sounds similar (like to and two) or the same. It's weird.

Not sure if I’d make those mistakes in German as well, as I write far less in it ;)


I do some handwriting (for journaling), and when I'm really tired, I sometimes also make these types of mistakes in Hungarian handwriting, though far less often than in typing. Probably we are always in a hurry, and should slow down a bit and think twice (or more thoroughly) when composing text (or even in other aspects of our lives).


I like Caddy, mainly for the ease of configuration. One thing that surprised me though: by default it has compression disabled. I discovered it 1 year after moving from nginx (with the nginx config having compression enabled) and it was funny, because at the time of migration I got comparable performance out of the two. Obviously, after enabling compression, it’s faster now.


We don't enable compression by default because we leave it up to the site owner to decide whether to optimize for network efficiency or CPU efficiency. (Also, disabling implicit things is more tedious and confusing than enabling things.)
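For reference, turning it on is a single directive. A minimal Caddyfile sketch (the domain and upstream port are placeholders):

    example.com {
        encode zstd gzip
        reverse_proxy :9000
    }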

Glad to hear you use Caddy! :)


To add onto mholt's reply, we avoid implicitly enabled functionality where possible (the only feature that's implicitly enabled generally is Automatic HTTPS, off the top of my head). By default, Caddy's HTTP handlers do nothing which is good because it gives you a clean slate to build on top of. If you had to turn off features to reset back to zero, that's cruft. For example, think of CSS resets, which are needed to turn off styles added by browsers by default; that's annoying and a complication that all websites need to deal with to get consistent behaviour.


Great approach. User agent default style sheets are the worst, especially since they are all different.

But instead of handing the user an empty slate with nothing, it should contain a note saying "look, these are the recommended options". If the initial config is interactive, it could even prompt to activate them: "want to use compression?" - "yes", etc.


Those recommendations are in the documentation.

Interactive configuration is easier said than done. We don't realistically have the time to build and maintain that on top of the core program and config. We're already stretched pretty thin.


I'm always a bit bothered by them saying they are the "only" web server that can do this. First you can also just configure it in a way where it will not use HTTPS (e.g. if you provide an IP:port instead of a hostname). And if you do require specific configuration to enable HTTPS and automatically get certificates via ACME, then lots of other web servers can do this too. Even my own web server can do it: https://github.com/pfirsich/htcpp (see https://github.com/pfirsich/htcpp/blob/main/configs/acme.jom... for an admittedly much more complicated config).


The distinction is that Caddy is the only webserver* that enables HTTPS by default. All others don't attempt to enable HTTPS by default, they start with HTTP and you need to add config to make it use HTTPS. Nor do they enable ACME by default. With Caddy you only need to tell it your domain name and it'll do the rest.

* popular webserver anyway; we can't reasonably count yours with only 6 github stars that we've never heard of :P
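For example, a complete Caddyfile for an HTTPS static site can be as small as this (domain and path are placeholders); Caddy obtains and renews the certificate itself:

    example.com {
        root * /var/www/site
        file_server
    }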


> if you provide an IP:port instead of a hostname

That still gets served over HTTPS.

The only time HTTPS isn't used is if a host portion is missing entirely (IP or name).


Or if you explicitly prefix the hostname with http://


True, you can definitely turn HTTPS off explicitly.
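A sketch of that (the http:// prefix pins the site to plain HTTP on port 80):

    http://example.com {
        respond "Served without TLS"
    }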


I didn't even know about that. Thanks!


Caddy seems to be continuously getting better and I think mholt occasionally hangs around here and is a rather pleasant person.

I recall once needing to help a new person on another team set up TLS after they had tried to do it unsuccessfully themselves in some configuration (one that might have had a networking setup where HTTP-01 for ACME doesn't work, actually). I just started recording my desktop, grabbed a server from Hetzner live, and thanks to Caddy could give them an example of how things should work, with all of the steps, in like 5 minutes total.

Nowadays I use Apache (mod_md) for my personal needs due to some plugins I need, it actually makes me wonder why Nginx doesn't seem to have integrated support for ACME yet, even if certbot is serviceable too. Either way, props to Caddy for raising the bar for web servers.


I love Caddy and I have been using it for a few years now. The documentation is kinda crap, but I still spend less time getting things done. Most projects are just copy/paste of old config files. Every time I had a problem, I asked and got an answer within 2 days. And nice answers, not like Stack Overflow...


Thanks for the kind words.

> The documentation is kinda crap

We continually get comments like this, but we rarely get elaboration on why people think this. Please explain what you mean. What's crap about it? We spend a lot of time improving the docs, and without specific feedback we're surprised to hear this.


That's because Francis, Mohammed, Matthew, and our other helpers are awesome. They are volunteers and do it because they like to help and find the project interesting. Thanks for being a part of the community


I'm currently reverse proxying a few docker containers with nginx. Caddy seems tempting but one dealbreaker I can't find in the docs is whether or not it automatically refreshes its DNS cache if a docker container restarts and changes its IP address?

e.g. In nginx, I use "resolver 127.0.0.11 valid=30s" so "proxy_pass {container}:80" will only cache the {container}'s IP address for 30s
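For anyone copying this pattern: nginx only consults the resolver at request time when proxy_pass contains a variable, so the full sketch looks roughly like this (container name is a placeholder):

    resolver 127.0.0.11 valid=30s;
    set $upstream http://mycontainer:80;
    proxy_pass $upstream;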


From my experience I’ve not had any issues with Caddy using stale DNS entries when proxying Docker containers.

From the forums it looks like Caddy doesn't explicitly define any DNS behaviour; it relies on Golang's defaults, which in turn simply use whatever the host provides. I.e. whatever IP your host's DNS resolution returns is used, and Caddy doesn't cache internally; it relies on your host's DNS cache. It's reasonable to assume that any modern OS respects DNS TTL, and for something like Docker it's gonna be doing a lookup on every request (which should be pretty much instant, as everything is on the same machine).

https://caddy.community/t/proxy-dns-resolver-mechanism/5934

https://stackoverflow.com/questions/40251727/does-go-cache-d...


Perfect, that's exactly what I'm looking for


If you want a slightly heavier but more robust solution, caddy-docker-proxy[0] is a plugin that listens to the Docker socket and automatically updates the Caddy configuration based on Docker labels you add to containers.

I.e. it makes Caddy act a bit more like Traefik. Most of the time, you'll just add the label `caddy.reverse_proxy={{upstreams http 8080}}` to your containers and the plugin will regenerate Caddy's configuration whenever the container is modified.

[0] https://github.com/lucaslorentz/caddy-docker-proxy
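A minimal compose-file sketch of that setup (image, service name, domain, and port are all placeholders):

    services:
      myapp:
        image: myorg/myapp
        labels:
          caddy: example.com
          caddy.reverse_proxy: "{{upstreams http 8080}}"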


Caddy was the first to default to https.. because it was new. Nothing special about that.


I think you mean that Let's Encrypt was new. It started on November 18, 2014, and Caddy's first release was on April 28, 2015.


Because Caddy was new.


It's still the only popular webserver to default to HTTPS. Others default to HTTP first, and require you to add more config to enable HTTPS.


The thing I am missing the most is some kind of HTTP-01 by proxy, like https://github.com/acmesh-official/acme.sh/wiki/Stateless-Mo...

If DNS-01 is not an option or too complicated, this saves you from exposing a host to the internet for no good reason.


I'm a little wary of copying things from acme.sh after I discovered a 0-day RCE in it.

Could you open an issue to discuss your requirements? We'll take a look at solutions.


The most hassle free way to reverse proxy with Docker. I love it.


Big fan of caddy. We use it internally and our company provides financial support to the developers.


FusionAuth is awesome :D

Thank you for your sponsorship!!


Maybe first but not the only! https://github.com/donuts-are-good/appserve


Only popular though! We can't reasonably consider projects with only 8 github stars as part of marketing statements.


Popular? I can do a backspin without spilling my beer. Get the caddy guys on the phone :)


It's not meant as an insult. When writing marketing copy, you need to consider mindshare, and I can't really say that webservers without much of a community are worth considering. Especially when we didn't know they existed.


That is pretty impressive, I'll admit. :)


Just curious, why use autocert instead of CertMagic?


It's what I was familiar with, and it works great. That doesn't mean I can't be swayed by something better though. What features make you choose certmagic over autocert?


Makes sense -- I will often go for what I'm familiar with, too.

Aside from the fact that I created CertMagic, I think it has a few benefits over autocert. It is designed to scale to thousands of certificates. It will staple OCSP for you automatically. It can obtain certificates "on demand" during handshakes. It is more robust to failures and can keep sites up even when other ACME libs let you down. CertMagic supports all challenge types; I think autocert only does the TLS-ALPN challenge, which requires port 443. There's a lot of other improvements and enhancements that autocert is lacking IMO. A scan of the readme should help illustrate: https://pkg.go.dev/github.com/caddyserver/certmagic#section-...

But I guess use what works for you!


Nothing wrong with another tool in my chest, I'll give it a shot! Thanks for the referral.


>All hostnames (domain names) qualify for fully-managed certificates if they:

-are non-empty

-consist only of alphanumerics, hyphens, dots, and wildcard (*)

-do not start or end with a dot (RFC 1034)

Someone help me understand this part...didn't know this


I'm not sure I understand the question. What's unclear exactly?


Caddy is great. My only complaint is they insist on sending a:

    Server: caddy
Header that is impossible to turn off since it's hardcoded here: https://github.com/caddyserver/caddy/blob/master/modules/cad...

The developer's annoying response is "it doesn't improve privacy or security, so we won't give you the option to remove it".


It's not impossible to turn off. NGINX does this too, but you have to recompile NGINX to disable that header.

With Caddy, you just need:

    header -Server
in your config.


This doesn't work for HTTP redirects to HTTPS. I couldn't find any way to disable the Server header in those responses without patching.


They insist on adding it to the standard response path, but they're happy for you to remove it:

    header -Server
However, as this isn't global configuration, it'll tend to pop back up in implicit configs like HTTP redirects and error handling if not overridden.
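One workaround is to declare the HTTP site yourself, so the directive applies to the redirect too; a sketch (placeholder domain, untested):

    http://example.com {
        header -Server
        redir https://example.com{uri}
    }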


Is it possible to disable it on http redirects? I haven't found any way to do that


Just curious. I use Ubuntu on my servers (as many do) and I deploy everything as standard Systemd service (even my apps). However, when I wanted to try out Caddy, I realized they don't provide you with one, so you have to write your own scripts putting systemd config files, enabling the service etc. Is this what everyone does these days? Seemed kinda strange for mainstream software.


That's really more of a packaged/non-packaged thing, though, isn't it? Like, I once installed caddy on a CentOS (RIP) system by running `yum install caddy` and that did give me the right systemd config out of the box, which I wouldn't expect to get from an out-of-tree binary I just plopped on the system.


But they do provide systemd unit files:

https://caddyserver.com/docs/running#unit-files

Or are you saying they should be included with the download?


It's just so strange. They provide two systemd files; you need to choose one, edit (!) it, put it on the machine, etc. Compared with nginx's boring apt-get install that does the right thing, this feels like an immature way to distribute things. Especially for people who script and document all of their server setup procedures.


Take a look at the Debian/Ubuntu install instructions. “Installing this package automatically starts and runs Caddy as a systemd service” https://caddyserver.com/docs/install#debian-ubuntu-raspbian


All right, totally missed it then, thank you


I love Caddy, using it as a reverse proxy (even in Docker) is so nice and easy. All you need is 2 lines of config:

    :2080
    reverse_proxy :9000


Free certs also take just one line: "email myemail@example.com". This applies to all sites and everything is auto-managed.

It also has a nice one-line shortcut directive which covers the vast majority of PHP sites out there.
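That shortcut is the `php_fastcgi` directive; a sketch, assuming a typical PHP-FPM socket path (domain and paths are placeholders):

    example.com {
        root * /var/www/mysite
        php_fastcgi unix//run/php/php-fpm.sock
        file_server
    }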


Actually, you don't even need the email for certificate automation. You only need to give Caddy a valid public domain as your site address.


Great. Either something changed, or I took something optional as required when I started using it. (Or perhaps one of the cert providers needs it while the other doesn't?) Good to know anyway. Cheers.

Here's some older instructions for zerossl where email seems to be necessary. (https://caddy.community/t/using-zerossls-acme-endpoint/9406)


For that you can even just use the command line without a file:

    $ caddy reverse-proxy --from :2080 --to :9000


If you just need that you don't really need reverse proxy in the first place...


This way the app doesn't have to handle HTTPS though.


But then you have 2 apps running instead of one. There are already apps (and libs for them) that build Let's Encrypt support directly into the app.

Sure, if it's a 3rd-party app; but if you're writing one yourself, adding it to your app and simplifying everything around it is worth it compared to deploying an additional component.


Easiest way to use HTTP/3!!


How does Caddy compare to Nginx Unit? Is the API easier to use?


You can feed it JSON. I use the Caddyfile, but I found the documentation well done.
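For a taste, a minimal JSON config sketch for a reverse proxy (ports are placeholders):

    {
      "apps": {
        "http": {
          "servers": {
            "srv0": {
              "listen": [":2080"],
              "routes": [
                {
                  "handle": [
                    {
                      "handler": "reverse_proxy",
                      "upstreams": [{ "dial": "localhost:9000" }]
                    }
                  ]
                }
              ]
            }
          }
        }
      }
    }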



