After reading this entire post, I’m still left wondering what problem this solves for me, beyond fluffy promises of ‘everything is going to be better’.
At the very least I’d want to see a comparison with what we have now, to show me how this is better.
I get that I can explore more on my own, but if the introductory post doesn’t give me a compelling reason to, I’m not very motivated to do so.
Any motivation I do have completely hinges on the words ‘from the creators of docker’, not on the merits of this particular product itself.
That's fair; it can be hard to find the right balance of high-level explanation and technical detail. We tried to solve this by tailoring different parts to different audiences:
* The blog post is more high-level. It describes the very real problem of devops engineers being overwhelmed with complexity, and the promise of a more modular system, but does not provide lots of details.
* The dagger.io website does provide more technical detail. For example it talks about the 3 most common problems we solve: drift between dev and CI environments; CI lock-in; and local testing and debugging of pipelines. It also features animated code samples.
* The documentation at https://docs.dagger.io goes into even more detail and walks you through a concrete example.
We do feel that we can do a better job explaining the "meat" of the tool. But we decided to launch and continue improving it incrementally, in the open. If you have any specific suggestions for improvements, please keep them coming!
I run a CI application for Laravel developers (Chipper CI).
It turns out, the gap between "works locally!" and "works in CI!" is not negligible, especially when you're not sure "about all that server stuff".
Getting this working locally with a fast cycle time, and then being able to easily move that into a CI environment of your choice sounds exciting to me.
Furthermore, the majority of our customer support is "I can't reproduce this locally but it's broken in CI". Everyone blames the CI tool, but it's almost never the CI tool - just drift between environments. A way to debug locally is a killer feature.
Is it worth an entire, funded company? I'm not sure, but I'm excited for them to exist!
For Jenkins, you stage your own instance locally and configure your webhooks to use that. It's exactly as terrible as it sounds, and I never recommend this approach.
For Travis and Concourse (I think), you can use their CLI to spin up a runner locally and run your CI/CD yaml against it. It works "fine," as long as you're okay with the runner it creates being different from the runners it actually uses in their environment (and especially your self-hosted runners).
In GitHub Actions, you can use Act to create a Dockerized runner with your own image which parses your YAML file and does what you want. This actually works quite well and is something that threatens Dagger IMO.
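For a sense of what that looks like in practice, here's roughly how I invoke it (the job name and the runner image below are just examples; swap in whatever your workflow actually defines):

```
# Run the "test" job from .github/workflows locally; the job name and the
# catthehacker runner image are placeholders for your own workflow/setup.
act push -j test -P ubuntu-latest=ghcr.io/catthehacker/ubuntu:act-latest
```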
Other CI systems that I've used don't have an answer for this very grating problem.
Another lower-order problem Dagger appears to solve is using a markup language to express higher-level constructs like loops, conditionals, and relationships. They're using CUE to do this, though I'm not sure if hiring the creator of BCL (Borgmon Configuration Language) was the move. BCL was notoriously difficult to pick up, despite being very powerful and flexible. I say "lower-order" because many CI systems have decent-enough constructs for these, and this isn't something I'd consider a killer feature.
I _also_ like that it assumes Dockerized runners by default, as every other CI product still relies on VMs for build work. VMs are useful for bigger projects that have a lot of tight-knit dependencies, but for most projects out there, Dockerized runners are fine, and they're often a pain to get going with in CI (though this has changed over the years).
My "workaround", if you can call it one, is to design things so they don't need the CI/CD server to get a build/test/deploy feedback loop. I should be able to do any stage of the pipeline without the server, and thus no code is committed until I know it is working. The pipeline is basically a main() function that strings together the things I can already do locally. If I need anything intelligent to happen at any stage of the pipeline, I write a tool to do it using Go or Python or something that I can write tests for and treat as Real Software. After fighting with this for many years, this approach has worked best for me.
I didn't dig deeply into the docs, but Dagger appears to be running a multi-stage pipeline locally. If that is the case, I wouldn't want that either. I use Concourse, which has very good visualizations of the stages, and if I used Dagger there, it would consolidate those stages into one box without much feedback from the UI. Also, with Concourse you can use `fly execute` to run tasks against your code on the actual server, without having to push anything to a repo.
Concourse also has `fly hijack`, which is the baddest/funniest command of the decade. It's also very nice to use, instantly logging you into the remote container of a failed build so you can poke around and see what actually went wrong, and try to run things interactively before fixing and re-executing. Much better than poking at things in the dark until you hit another issue...
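For anyone who hasn't used them, the shape of those commands is roughly this (the target, task file, and pipeline/job names are placeholders for your own setup):

```
# "ci" is a placeholder Concourse target; task.yml and my-pipeline/test stand
# in for your own task config and pipeline/job names.
fly -t ci execute -c task.yml         # run a task on the server against local inputs
fly -t ci hijack -j my-pipeline/test  # drop into the container of a recent build
```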