I appreciate you sharing your first interpretations! I get the impression it’s not entirely clear yet what this is. Positioning it, and figuring out where we go with it, is tricky and going to be a journey.
While this is CI-adjacent (and indeed I am a big Dagger fan and certainly inspired by it), it predominantly lives in the CD realm instead.
In particular, it has helped us with something I feel is missing in the GitOps space: the connective tissue between environments. It automates updates to app versions directly in the repo and then brings that all together into a single dashboard where I can see what my repo says the desired state of the world should be. Ultimately, we want to surface the actual state too.
We’ve hedged our bets a bit so far and left room for non-GitOps CD to potentially slot in too. But I’m not sure if we should just double down, be explicit, and go hard on GitOps.
When you say a “why not…”, you’re referring to something like a “Glu compared with X” section, right? That’s a good idea. I will add that!
We already support S3, Azure and GCS, as well as OCI (any compatible registry), as a source in the open-source server-side evaluator. So if you add a deploy step from your Git repo to any of these targets, you can use them via the Flipt server process as the source of truth in production. Both our server-side and client-side SDKs can source from Flipt in these scenarios.
But we are keen both to explore skipping the Flipt server middleman for client-side SDKs, and to make the publish step to these locations a simple configuration process in our UI, so you can avoid having to write things like GitHub Actions workflows to achieve the end-to-end result.
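To make the “skip the middleman” idea a bit more concrete, here’s a rough sketch (in Go, against aws-sdk-go-v2; the bucket and key are made up, and this is not the Flipt SDK) of a client process pulling published flag state straight out of object storage:

```go
// Illustrative only: not the Flipt client SDK, just a sketch of a client
// fetching flag state directly from a hypothetical S3 bucket/key that a
// publish step has written to.
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Standard AWS config resolution (env vars, shared config, etc.).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)

	// Hypothetical bucket/key where a publish step has put the flag state.
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("my-flag-state"),
		Key:    aws.String("production/features.yaml"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()

	raw, err := io.ReadAll(out.Body)
	if err != nil {
		log.Fatal(err)
	}

	// A real client-side evaluator would parse and evaluate this document;
	// here we just show the state is directly reachable without a server hop.
	fmt.Printf("fetched %d bytes of flag state\n", len(raw))
}
```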
On this: in Flipt we also support publishing the state to object storage (S3, Azure, GCS) and to OCI.
Flipt Open-Source can be run to consume from these locations. You can go as far as configuring a workflow to publish on push, so that you can combine our managed UI with any of these distribution methods through Git.
With any of these backends (including Git), we periodically fetch and cache data in memory. Evaluations work on an in-memory snapshot, so temporary backend downtime doesn't propagate into your applications being unable to get flag evaluations.
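Roughly, the pattern looks like this (a minimal Go sketch; the names are illustrative, not Flipt internals):

```go
// A sketch of the poll-and-snapshot pattern: a background loop refreshes an
// in-memory snapshot, and evaluations always read the last good snapshot, so
// a failed fetch never breaks evaluation.
package flags

import (
	"context"
	"log"
	"sync/atomic"
	"time"
)

type Snapshot struct {
	Flags map[string]bool // simplified: real flag state is richer than bools
}

type Store struct {
	current atomic.Pointer[Snapshot]
}

// Poll refreshes the snapshot on an interval. On error, it keeps serving the
// previous snapshot rather than failing evaluations.
func (s *Store) Poll(ctx context.Context, interval time.Duration, fetchState func(context.Context) (*Snapshot, error)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		snap, err := fetchState(ctx)
		if err != nil {
			log.Printf("fetch failed, keeping previous snapshot: %v", err)
		} else {
			s.current.Store(snap)
		}

		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}

// Enabled evaluates against whatever snapshot is currently held in memory.
func (s *Store) Enabled(key string) bool {
	snap := s.current.Load()
	if snap == nil {
		return false // no snapshot yet: fail closed
	}
	return snap.Flags[key]
}
```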
Caching in this scenario isn't something I'd lean on unless you can invalidate or repopulate easily. I've used etcd in a feature flag scenario due to the speed of its replication and its ability to be queried frequently without the need for caching.
Full version control, which can be co-located with other configuration for the rest of your system (think Terraform or k8s manifests), makes it easier to build a picture of how your system was configured at a given point in time, because you have a single history to walk.
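For example, you can literally walk that history with something like go-git and see what the flag configuration looked like at each commit (the repo path and file name here are made up):

```go
// A sketch of "walking a single history": stepping through commits with
// go-git and looking at the flag configuration as it existed at each point
// in time.
package main

import (
	"fmt"
	"log"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("./config-repo")
	if err != nil {
		log.Fatal(err)
	}

	head, err := repo.Head()
	if err != nil {
		log.Fatal(err)
	}

	commits, err := repo.Log(&git.LogOptions{From: head.Hash()})
	if err != nil {
		log.Fatal(err)
	}

	// For every commit, report what the flag file looked like at that moment.
	err = commits.ForEach(func(c *object.Commit) error {
		f, err := c.File("flags.yaml")
		if err != nil {
			return nil // file didn't exist yet at this commit
		}
		contents, err := f.Contents()
		if err != nil {
			return err
		}
		fmt.Printf("%s %s: %d bytes of flag config\n",
			c.Hash.String()[:8], c.Author.When.Format("2006-01-02"), len(contents))
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```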
Complexity of the initial implementation was certainly one, as we developed it. It’s not the most well-trodden path for this kind of problem (though it is well trodden for other kinds of apps). Obviously it lacks things like relations and a schema, which we have to build on top of the data in flat files ourselves.
One thing is that running Flipt open source on your infra means running replicas that all source from the same Git repo. They currently poll for updates, and this means eventual consistency comes into play when you scale. We have plans to help mitigate that with cloud though (pushing updates from cloud to your self-hosted runners).
You can get around most of the consistency problem by "scheduling" the change. So, if I know it is going to take 2 minutes to make the flag available to my entire infra, I can schedule it for 5 minutes from now (you could even make this configurable, a "default feature delay"), which moves the consistency problem to infrastructure clock sync.
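Something like this, as a rough Go sketch (the names and the "default feature delay" knob are illustrative, not an existing Flipt feature):

```go
// A sketch of the "scheduled change" idea: instead of flipping a flag to
// enabled immediately, the committed state carries an activation time, and
// every replica evaluates it against its own clock.
package flags

import "time"

// DefaultFeatureDelay is the illustrative "default feature delay": longer than
// the worst-case propagation time for a change to reach every replica.
const DefaultFeatureDelay = 5 * time.Minute

type ScheduledFlag struct {
	Enabled  bool
	EnableAt time.Time // when the change should actually take effect
}

// Schedule marks a flag as enabled, effective only after the delay has passed.
func Schedule(delay time.Duration) ScheduledFlag {
	return ScheduledFlag{
		Enabled:  true,
		EnableAt: time.Now().Add(delay),
	}
}

// Active is what each replica evaluates at request time. As long as every
// replica has picked up the change before EnableAt, and clocks are roughly in
// sync, they all start serving the flag at the same moment.
func (f ScheduledFlag) Active(now time.Time) bool {
	return f.Enabled && !now.Before(f.EnableAt)
}
```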
Feature flag state is still served dynamically through Flipt. Your application doesn’t have to redeploy for the changes to “become live”. That’s the main benefit.
That means you can experiment and target different cohorts with variants of your app without restarting processes everywhere.
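As a tiny illustration (nothing Flipt-specific, just the general idea): the variant is decided per request against whatever the current flag state says, so changing a rollout percentage in the repo changes behaviour on the next request, without any restart.

```go
// A sketch of cohort targeting at request time: the user is bucketed
// deterministically, and the rollout percentage comes from the dynamically
// served flag state rather than being baked into the deployed binary.
package main

import (
	"fmt"
	"hash/fnv"
)

// variantFor buckets a user into "beta" or "stable" based on a rollout
// percentage read from the latest flag state.
func variantFor(userID string, rolloutPercent uint32) string {
	h := fnv.New32a()
	h.Write([]byte(userID))
	if h.Sum32()%100 < rolloutPercent {
		return "beta"
	}
	return "stable"
}

func main() {
	// Imagine rolloutPercent was just read from the latest in-memory snapshot.
	for _, user := range []string{"ada", "grace", "linus"} {
		fmt.Println(user, "->", variantFor(user, 25))
	}
}
```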
Thank you. I guess I was imagining that the flags lived in your source code repo, and required a commit and push to update, thus triggering some CD build and redeploying your app anyways.
You're not wrong there at all. That is a very reasonable assumption and I think the default behaviour with most early CD pipelines. Every commit leads to a deploy event.
However, this can be changed so that not all commits/pushes are treated equally during CD. You can either use rules to ignore changes to certain sub-directories or files, or rely on reproducible builds and skip the process-restarting parts when the resulting artefacts haven't changed between two commits (e.g. the digest of a Docker image not changing from one commit to the next).
This is often an optimisation though, and takes time/effort to put in place.
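For the digest case, the check itself can be as simple as this sketch (placeholders throughout, not any particular CI system's API):

```go
// A sketch of the "skip redeploy when the artefact didn't change" optimisation:
// compare the image digest produced by the previous commit with the current
// one and only trigger a deploy when they differ.
package main

import "fmt"

func shouldDeploy(previousDigest, currentDigest string) bool {
	// Reproducible builds mean "same inputs, same digest", so an unchanged
	// digest tells us no process restart is needed for this commit.
	return previousDigest != currentDigest
}

func main() {
	prev := "sha256:aaa..." // digest built from the previous commit (placeholder)
	curr := "sha256:aaa..." // digest built from the current commit (placeholder)

	if shouldDeploy(prev, curr) {
		fmt.Println("artefact changed: trigger deploy")
	} else {
		fmt.Println("artefact unchanged: skip restart, config-only change")
	}
}
```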
Flipt is live-tailing the repository and serving this state dynamically to the clients.
The repo with flag configuration can be solely for flags, or sit alongside other infra configuration in more of a monorepo. You decide how you want it set up.
Obviously if it is alongside code, you may have to contend with CI in order to validate a change. But rules in CI or other monorepo tooling can adjust what runs and when, to improve the time it takes for configuration to become live.
Once a configuration change is integrated into a target branch in the repo, it becomes readable by Flipt and servable once fetched.