We are; it's serviceable, but it's expensive to run and extremely difficult to debug or extend. Spinnaker was designed to solve a different problem (orchestrating deployments onto EC2), and its k8s functionality was retrofitted onto it, and it shows.
IMO, because Dhall's syntax is foreign to the vast majority of developers, who have never done anything with Haskell or Elm or the like, it's a non-starter for broad adoption. We use Starlark, whose syntax is familiar to anyone who knows Python.
I like your analogy though. For us, the assembly language is using the k8s API directly in Golang. The "compiler" is the Golang Starlark interpreter extended with our own config API, much like what you would implement in Dhall. It's just that in this case you implement it in Golang, which has much better tooling than Dhall: a typed compiler, a debugger, IDE support, unit tests... so much easier to develop and maintain.
For internal k8s config at our org we built a config DSL using Starlark. The Golang Starlark interpreter is super easy to use and extend, and Starlark is familiar to every developer in our org because we are a Python shop. The tooling then spits out k8s YAML.
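To give a flavor (this is a made-up example, not our actual API — `standard_deployment` and the field names are hypothetical): a service owner writes a short Starlark file, and the tooling evaluates it and emits the corresponding k8s YAML:

```python
# service.star -- hypothetical Starlark config; the syntax is a Python subset,
# so it reads naturally to anyone in a Python shop.
load("config.star", "standard_deployment")

standard_deployment(
    name = "billing",
    image = "registry.example.com/billing:1.4.2",
    replicas = 3,
    env = {"LOG_LEVEL": "info"},
)
```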
Essentially, the config language implementation contains the same logic a helm chart would, but you're writing that logic in Go instead of a text templating engine. You can easily unit test the parts and rely on the compiler to catch basic mistakes. Way better than templating YAML.
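A minimal sketch of that idea (the type and function names here are hypothetical, not our actual API, and real tooling would use the official client-go types and emit YAML — JSON is used below for brevity, and the k8s API accepts it too). The point is that what a helm template expresses in text substitution becomes ordinary, typed Go code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Deployment is a minimal typed model of a k8s Deployment.
// Misspell a field name and the compiler rejects it, which is
// exactly the class of mistake YAML templating lets through.
type Deployment struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Metadata   struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Spec struct {
		Replicas int `json:"replicas"`
	} `json:"spec"`
}

// StandardDeployment plays the role of a helm chart: it encodes the
// org's standard config as a plain, unit-testable function.
func StandardDeployment(name string, replicas int) Deployment {
	var d Deployment
	d.APIVersion = "apps/v1"
	d.Kind = "Deployment"
	d.Metadata.Name = name
	d.Spec.Replicas = replicas
	return d
}

func main() {
	out, err := json.MarshalIndent(StandardDeployment("web", 3), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```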
We also provide escape hatches so people can easily patch the resources before they get serialized to YAML. People can use that to customize our standard deployment config however they want.
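One way to sketch the escape-hatch mechanism (again, names are hypothetical): callers register patch functions that run against the resources just before serialization, so any field can be overridden without forking the standard config. JSON stands in for YAML here to keep the example stdlib-only:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Resource is a loosely typed k8s object.
type Resource map[string]interface{}

// Patch mutates a resource in place before it is serialized.
type Patch func(Resource)

// Render applies the caller's patches to the resource, then serializes it.
func Render(r Resource, patches ...Patch) ([]byte, error) {
	for _, p := range patches {
		p(r)
	}
	return json.Marshal(r)
}

func main() {
	std := Resource{
		"kind": "Deployment",
		"spec": map[string]interface{}{"replicas": 3},
	}
	// Escape hatch: bump replicas without touching the standard config code.
	bump := func(r Resource) {
		r["spec"].(map[string]interface{})["replicas"] = 5
	}
	out, err := Render(std, bump)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```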
So far this has worked very well and been extremely easy to maintain.
Think about what you just said for a second. "endemic" deaths of innocent people are the litmus test for whether software should be locked down? How is that in any way acceptable?
No, that is not my reasoning. I'm not talking at all about people operating or using devices that carry inherent danger to themselves or others. I'm specifically responding to the parent's argument about the threshold at which it becomes acceptable to compromise an owner's ability to modify these machines. The parent said that it's only acceptable AFTER some nontrivial number of deaths, presumably of people in addition to the owner of the machine.
A related question I want to ask is: what is inherent about software that makes it not okay to have restrictions on modifications where lives other than the owner's may be at stake? I see other comments here that say things like "oh these machines should have mechanical limiters etc. that prevent them from being dangerous." But isn't that essentially the same thing, just implemented in hardware and not software?
I'd like to note that I'm not saying that John Deere is in the right here. I just feel that the argument in the parent comment is, for lack of a better word, inhumane.
I think the answer to this is to do more to educate the individuals who will take advantage of this policy so that the actions they take help give a voice to or amplify the voice of part-time working minorities or other marginalized groups.
In other words, I believe we should laud all attempts to build a culture of civic engagement, no matter your political beliefs or personal background. The next step is to help find ways for that civic engagement to be generally helpful rather than harmful.
Full disclosure: I work at one of the companies named in this proposal.