On some level, they're just providing an interface by which you can configure your tests, and a tool where you can send that configuration to run those tests.
So... why YAML (plus a weak escape hatch) and not something obviously Turing-complete for that interface?
1. Some fraction of the target audience aren't programmers. I know a lot of people who can look at a few YAML examples, play with a few requests in Postman, and produce valid-enough tests (a sketch of what such a file might look like follows this list). I know a lot fewer who can write that in any "real" language.
2. There's an obvious "right" way to use a tool like this. Each scenario has certain properties spatially colocated, the scenarios are colocated, .... Turing-complete languages make it hard to expose an interface enforcing good, nice, safe patterns just via the type signature. That's a blessing and a curse (I'd much rather have an escape hatch, personally), but it's definitely a benefit for somebody.
3. If you don't "need" a Turing-complete language (it's admittedly hard to determine this up-front, and I think Turing-complete YAML comments are far worse than just using a Turing-complete configuration language), reaching for one is asking for trouble. I can expand if you're curious, but going down the rabbit hole of the "principle of least power" should at least give you a few ideas to poke at.
4. Even amongst programmers, many more people know JSON, CSV, YAML, NDJSON, XML, or _some_ quasi-human-readable configuration language than know your favorite Turing-complete option. Look at Neovim and its init.lua file as an example. How many active users actually know how to do anything meaningful with it? How many struggle even to add a few keybindings and a plugin whose README doesn't explicitly spell out how it interacts with their favorite plugin manager? Lua isn't a hard language, but diving into a totally foreign ecosystem takes time.
5. Suppose you have something Turing-complete in the middle. Now your hands are tied to that ecosystem. You have to bundle a language or rewrite an interpreter/compiler. Alternative implementations are similarly constrained. Tree-sitter grammars are a pain in the ass to write if you want to make your config files easy to edit in people's favorite editors. User bug reports might be real bugs or might just be programmers making dumb programmer mistakes again. This point is partly an elaboration of point (3), but there are real costs to full-blown languages.
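To make point (1) concrete, here's a sketch of the kind of scenario file I have in mind. The shape and the field names (`scenarios`, `request`, `expect`) are purely hypothetical, not any particular tool's schema:

```yaml
# Hypothetical test config -- field names invented for illustration only
scenarios:
  - name: create user
    request:
      method: POST
      url: https://api.example.com/users
      body: { "email": "test@example.com" }
    expect:
      status: 201

  - name: fetch missing user
    request:
      method: GET
      url: https://api.example.com/users/does-not-exist
    expect:
      status: 404
```

Everything about one scenario lives in one place, the scenarios sit next to each other, and there's nowhere to hide a loop or a side effect.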
Mind you, with YAML having a 1000+ page spec, never once implemented properly, never once fully understood by any one person, and countless Fortune-500 companies having major issues due to unexpected YAML heuristics, I don't think that would be my first choice of configuration language. It has the benefit of popularity because of Kubernetes and CloudFormation and that whole ecosystem though, so I can see the point.
You can simply run `glasskube install cert-manager`, or in the future `glasskube install [your package]`, and Glasskube makes sure all the dependencies get transitively installed.
In addition, our GUI is served locally from the client and just creates package CRs, which are then picked up by the package operator.
I can simply run `helm install cert-manager` or `kubectl operator install cert-manager -n cert-manager --channel stable --approval Automatic --create-operator-group` - both have dependency resolution.
Not trying to be negative here, trying to understand if there is value or functionality missing from established solutions, because it's not obvious to me.
Your feedback is definitely reasonable, I don't perceive it as negativity.
Let me elaborate on both examples:
1. Helm dependencies
As cert-manager doesn't have any dependencies, this will work fine, but in many other Helm charts we often saw a "bitnami postgres" or the kube-prometheus-stack included as a dependency. These dependencies get installed in the same namespace as the original chart, which is not very transparent for the user. A classic example is the Bitnami WordPress chart, which includes the MariaDB chart. The MariaDB chart received a major update through a WordPress chart update, which resulted in a lot of broken WordPress installations.
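For reference, this is roughly how a parent chart pins a bundled subchart in its Chart.yaml (the version and repository below are illustrative, not copied from the actual WordPress chart):

```yaml
# Chart.yaml (excerpt) -- illustrative values, not the real Bitnami chart
apiVersion: v2
name: wordpress
dependencies:
  - name: mariadb
    version: 11.x.x
    repository: https://charts.bitnami.com/bitnami
    condition: mariadb.enabled
```

Because the subchart version is pinned inside the parent chart, bumping WordPress can silently bump MariaDB, and both land in the same release and namespace.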
With Glasskube we try to separate these dependencies into separate packages, so we can update the MariaDB operator independently from the WordPress installation. A Glasskube package will be able to request a database (as a CR), and if the CRD is not yet present in the cluster, we will install the CRD provider (mariadb-operator).
2. OLM CLI
If you only want to manage operators and are very skilled, you can use OLM, which will have a similar effect. But as I tried to explain with the WordPress example, most packages will not purely be an operator; they have some namespaced components (like Deployments) and will require custom resources from operators.
We have not yet introduced this distinction, but we may add the two categories ClusterPackages (operators) and Packages (apps), which play together nicely.
Our aim is not to reinvent the wheel, but to give cloud-native developers the ability to make sure operator dependencies are met, and to give Kubernetes users a simple CLI and GUI to get started without copy-pasting Helm commands that somebody else will later update with different values, or without having to read the changelog of a Helm chart to fully understand the consequences.
Happy to get more of your knowledge to make Glasskube a better product, as we are still in the technical preview phase.
I work at a recently IPO'd tech company. Oxide was a strong consideration for us when evaluating on prem. The pitch lands among folks who still think "on prem.... ew".
It looks like a cloud-like experience on your own hardware.
As did some elements of my own company, but business risks like those are not for fledgling public companies. To be honest, right now anyone in a _public_ company advocating for it at this stage of development should have all of their decision-making power removed, if not outright be shown the door.
That goes double if it's your CTO...which is exactly what ended up happening with us.
I'm not saying "no, never", but clearly "no, not right now".
> Had to do a lot of work to get node utilization ... higher than 50%
How is this the scheduler's fault? Is this not just your resource requests being wildly off? Mapping directly to a "fly machine" just means your "fly machine" utilization will be low.
I think there’s a slight misunderstanding - I’m referring to how much of a Node is being used by the Pods running on it, not how much of each Pod’s compute is being used by the software inside it.
Even if my Pods were perfectly sized, a large share of the VMs running them was left underutilized because the Pods were poorly distributed across the Nodes.
Is that really a problem in cloud environments where you would typically use a Cluster Autoscaler? GKE has an "optimize-utilization" profile, or you could use a descheduler to bin-pack your nodes better.
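For reference, the descheduler route looks roughly like this. This is a sketch of its v1alpha1 policy format with made-up thresholds (check the current descheduler docs for the exact schema), and it's meant to be paired with a scheduler scoring strategy that favors packed nodes:

```yaml
# Sketch of a DeschedulerPolicy: evict pods from underutilized nodes so the
# scheduler can repack them onto fewer nodes (thresholds are illustrative)
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "HighNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 50
          "memory": 50
```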
DX might be better I suppose, since you don’t have to fiddle with node sizing, cluster autoscalers, etc.
Someone else linked GKE Autopilot which manages all of that for you. So if you’re using GKE I don’t see much improvement, since you lose out on k8s features like persistent volumes and DaemonSets.
How is GitLab CI materially different from the Jenkins model?
I find that the only difference is that it's YAML - so even harder to debug - and it maintains the same model where you must re-run an entire pipeline on every commit to test functionality.
Yeah, YAML isn't ideal, but I personally found Jenkins terrible to configure, not super well documented, and missing features compared to a GitLab installation (e.g. a built-in registry).
The ability to rerun, target, and influence builds themselves is also better on GitLab, I think.
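A minimal .gitlab-ci.yml sketch of what I mean (stage names and commands are placeholders): failed jobs can be retried individually from the UI, and rules let you gate specific jobs instead of re-running the whole pipeline:

```yaml
# .gitlab-ci.yml sketch -- stages and commands are placeholders
stages: [build, test, deploy]

build:
  stage: build
  script: make build

test:
  stage: test
  script: make test

deploy:
  stage: deploy
  script: make deploy
  rules:
    # offer this job only on the default branch, and only when triggered manually
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual
```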
I'm just skimming, but it looks like the docs are fantastic. I've spent some time with terraform internals, and this seems like a significant improvement for a dev looking to work with this codebase. Gives a great overview to get started. well done!
I'm not sure which docs you mean specifically, but most of the docs are fairly unchanged (other than trademark stuff) from the original repo, so if any of the docs improved, then to give credit where credit's due, the kudos should go to the Terraform Core devs.
Disclaimer: I work at Spacelift, and am currently the temporary Technical Lead of the OpenTF Project, until it's committee-steered.
Ansible is great, but (imo) aged. Sure, it's good for dealing with legacy hardware that can't support Terraform-like state, but (imo) untyped YAML and excessive playbook runtimes turn into a significant development drain as you scale.
Ansible solved a large problem (config management) before the Kubernetes era, but containerization accomplishes the same goal for most applications before deployment.
Is there anything in Ansible that is susceptible to aging?
I mean, Ansible is a tool designed to apply idempotent changes on one or more computer nodes following a declarative specification, and that only requires ssh access to work. What is there to age?
> Sure it's good for dealing with legacy hardware that cannot support terraform like state,
What? Exactly what leads you to believe that anything in Ansible is tied to hardware, let alone legacy hardware? And what do you mean by "terraform like state"?
> but (imo) untyped yaml and excessive playbook runtimes turn into significant development drain as you scale.
I don't understand what you tried to say, and frankly your comment sounds like an AI-generated buzzword soup.
With Ansible you need to specify the configuration state you want your nodes to have, and you need to apply configuration changes in a consistent sequence. This means not only specifying the configuration changes but also the verification and validation checks. The extent of your playbooks depends on how extensive your configuration is.
> Ansible solved a large problem (config management) before the kubernetes era (...)
Your comment makes absolutely no sense at all. Kubernetes provides a way to create clusters and run apps on them, but COTS hardware or VM instances aren't magically born into a working cluster node. What Kubernetes does bears no resemblance to what Ansible actually does. Ansible is used to configure nodes without requiring anyone to install any specialized software beyond setting up a working SSH connection. I personally use Ansible for stuff like setting up a Kubernetes cluster on COTS hardware running fresh Ubuntu installs using MicroK8s. How exactly do you expect to pull that off with Kubernetes?
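Roughly what that looks like in practice (hostnames, group names, and module options here are illustrative, not my actual playbook):

```yaml
# playbook.yml sketch: turn fresh Ubuntu hosts into MicroK8s nodes over SSH
- hosts: k8s_nodes
  become: true
  tasks:
    - name: Install MicroK8s from the snap store
      community.general.snap:
        name: microk8s
        classic: true

    - name: Wait until MicroK8s reports ready
      command: microk8s status --wait-ready
      changed_when: false
```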
2. I mean, I don't use Ansible with any cloud, only with hardware or legacy on-prem stacks - older versions of Cisco, NetApp, VMware. I prefer a stateful system like Terraform to a stateless one like Ansible.
3. I like typed languages. I hate YAML. Logic in Ansible playbooks (YAML) is inevitable and a nightmare at scale (see the snippet after this list).
4. Having moved to a container orchestrator, all of my nodes are immutable, I do not change or modify them. Hardware and VM instances _can_ be born magically into existence. Nearly all infra providers support [cluster-api](https://cluster-api.sigs.k8s.io/) or some other autoscaling controller. Network infrastructure can now be managed with TF, so I go that route.
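Re point 3, a small, made-up illustration of the kind of Jinja-in-YAML logic that creeps into playbooks and that nothing type-checks (the variable names are invented; this is a task-list fragment, not a full play):

```yaml
# Contrived tasks: stringly-typed branching that no tool will type-check
- name: Pick a web server package per distro
  set_fact:
    web_pkg: "{{ 'httpd' if ansible_facts['os_family'] == 'RedHat' else 'apache2' }}"
  when: (install_web | default(false)) | bool

- name: Install the web server
  ansible.builtin.package:
    name: "{{ web_pkg }}"
    state: present
  when: (install_web | default(false)) | bool
```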
> Ansible solved a large problem (config management) before the kubernetes era, but containerization accomplishes the same goal for most applications before deployment.
Depends on the size of your business. For small-medium size businesses, Ansible and VMs require much less support and developer knowledge than Kubernetes and containerisation.
I worked for a business with a million customers who served them using 10 VMs.
I don't disagree, but what Terraform-like, stateful config management is there for bare metal and VMs when they are necessary? What provisions the machines that run the clusters?
I see Ansible as a glorified task runner and every time I’ve used it, never get the same results twice. Idempotency is by convention only and if a single step fails it can be hard to recover.
Nix has its warts, but I think what Nix tries to achieve is what most people want on bare metal instead of Ansible. Declarative: you describe the end state, then Nix makes it happen. Exactly the same as Terraform.
It's been a very long time since I've used Chef/Puppet, but I found them much better than Ansible as well. The thing is, every professional job I do now uses Ansible, as much as I dislike programming in YAML.
I agree with you there as well. Ansible was great in theory but I’m with you in that I feel like I rarely got idempotent results like I would with Terraform.
Hate to only shill the Hashi stack, but Packer if you must. All you need is a container runtime and a Linux kernel. After that you shouldn't have to think about the core node.
If you're _really_ bare metal - build the base image, boot via PXE, and run apt update. Not much more complicated than that.
Kubernetes for small and medium businesses is extremely inefficient. I definitely wouldn't want to be dealing with that at this stage of my business's growth.
Not sure how containerization would help in the case of, for instance, network devices or bare-metal server management. You've picked only a small use case for Ansible; there's much more.
I'm really eager to get to the point where we can work on CRM extensibility and developer experience. We're hoping to bring traditional web development workflows and not re-invent anything.
We opted for a multi-tenant infrastructure for the cloud hosted version so there will be some additional challenges to make it work in that context!
If you wanna see what that might look like, take a look at ServiceNow. I do basically all my coding in VS Code, and Ctrl+S saves to the dev server. They have one of the more robust developer environments I've used.
Chinese chat apps have monetised really well through e-commerce. Admittedly this success hasn't been achieved by WhatsApp etc., so it's not trivial to replicate.
The revenue comes from the future acquirer of the company paying out to the current shareholders. I don't really believe consumers of Telegram are willing to pay for it. It's in the same boat financially as Twitter, but with better PR at the moment.