Hacker News | new | past | comments | ask | show | jobs | submit | your_fin's comments

Can confirm; my team spent the past 9 months upgrading an application from JDK 8 to 17, and there were breaking changes even after we got it compiling + running

I would highly recommend the ts-pattern [1] library if you find yourself wanting exhaustive switch statements! The syntax is a bit noisier than case statements in simple cases, but I find it less awkward for exhaustive pattern matching and much harder to shoot yourself in the foot with. Once you get familiar with it, it can trim down a /lot/ of more complicated logic too.

It also makes matching an expression rather than a statement, so it can replace awkward ternaries. And it has no transitive dependencies!

[1]: https://github.com/gvergnaud/ts-pattern


> It almost always breaks every possibility to programmatically process or analyze the config. You can't just JSON.parse the file and check it.

Counterpoint: 95% of config-readers are or could be checked in with all the config they ever read.

I have yet to come across a programming language where it is easier to read + parse + type/structure validate a json/whatever file than it is to import a thing. Imports are also /much/ less fragile to e.g. the current working directory. And you get autocomplete! As for checks, you can use unit tests. And types, if you've got them.
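As a sketch of what I mean (the file and field names here are invented):

```typescript
// features.ts — a "data value" checked in next to the code that reads it.
// `as const` gives consumers precise literal types.
export const features = {
  darkMode: true,
  maxRetries: 3,
} as const

// Consumers just `import { features } from './features'` and get
// autocomplete plus compile-time type errors, with no dependence on
// the current working directory. Compare the runtime alternative:
//
//   const raw: unknown = JSON.parse(fs.readFileSync('features.json', 'utf8'))
//   // ...followed by hand-rolled structure validation
```
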

I try to frame these guys as "data values" rather than configuration though. People tend to have less funny ideas about making their data 'clean'.

The only time where JSON.parse is actually easier is when you can't use a normal import. This boils down to when users write the data and have practical barriers to checking it in to your source code. IME such cases are rare, and most are bad UX.

> Side effects in constructors

Putting such things in configuration files will not save you from people DRYing out the config files indirectly with effectful config processing logic. I recently spent the better part of a month ripping out one such chimera because changing the data model was intractable.


I came across https://kdl.dev recently, and it has become my favored YAML replacement for when normal code doesn't work.

The data model is closer to XML than JSON, though, so unfortunately it's not a drop-in replacement for YAML.

Small sample:

  package {
    name my-pkg
    version "1.2.3"
    dependencies {
      // Nodes can have standalone values
      // as well as key/value pairs.
      lodash "^3.2.1" optional=#true alias=underscore
    }
  }


I wish it had some sort of include/composable-file mechanism, though


Very roughly, Zig is much closer to Go in terms of philosophy of language design, whereas Rust is much closer to Go in terms of the kind of software people write with it.

Go and Zig are both aggressively small and portable languages, and e.g. prefer explicit `defer` where Rust would handle cleanup implicitly. Unlike Zig, Go and Rust are both widely used and mature ~tier-1 languages with a deep bench of native libraries. However, there's a deep gulf between their philosophies. Go values a shallow learning curve and making code easy to write, whereas Rust values expressiveness and making it hard to write bad code. This is not to say that you can't write Good Software in Go or make money with Rust code. However, the languages tend to attract distinct groups of programmers, and there's a reason flamewars crop up between their proponents with unusual regularity.

Zig is notable for targeting C's niches of having lots of structural flexibility in memory management, being good at calling C code, and building C programs. C is the native language of most truly powerful interfaces (operating systems, graphics, cryptography, databases, other programming languages, device drivers, low-level networking, etc.), and programs primarily concerned with interfacing with such things tend to be better off written in C. Working with C code is in the happy path for Zig, while other mainstream programming languages demand significant specialized expertise or wrapper libraries (1). Wrapper libraries aren't always available, are almost always less useful than the original, and only save you /some/ build headaches. This makes Zig a candidate in many situations where C - and /maybe/ C++ - are the only real competition.

In short:

If your interest is learning new ways to think about writing programs, Rust will give you a more distinct perspective than Zig. Like Go, there will be lots of libraries and guides to help you out.

If your interest is in learning to write a kind of program you couldn't in Go and think "closer to the metal", Zig is likely better suited to task.

(1): This isn't nearly as true for C++, but there's still enough impedance mismatch between idiomatic C and C++ to invite interfacing bugs and wrapper libraries.


Other languages are much closer to Go than Zig is. A few are Vlang (the closest), Odin, and C3 (by the same measure Zig is being compared on). In terms of C interoperability, C3 (the closest) and Vlang also make this a strong feature of their languages. All of the languages in this group have aspirations of being used "closer to the metal". For instance, there is the Vinix OS [1], written in Vlang.

Rust coming across as a bit different, from those used to C family languages, has arguably a lot to do with its OCaml[2] (ML) roots.

[1]: https://github.com/vlang/vinix

[2]: https://en.wikipedia.org/wiki/OCaml


Thanks for the explanations!

I do really like Go's simplicity which is part of why Zig sounded interesting to me. Rust is definitely more mature though and "feels" more useful from the outside, but that could also just be all of the "I rewrote X in Rust" projects out there.

I'm surprised to see that Zig is closer to C than Rust is, and also surprised that Rust isn't _more_ like Go. I'll probably start with Rust just to try something different.

Like another commenter mentioned, I'm probably going to end up making a couple toy projects in both just to try them out. I don't have much of a reason for either of them at work, so it's just for personal projects and knowledge.


The author addresses this nicely in an earlier part of the blog book: https://jerf.org/iri/post/2025/fp_lessons_purity/


The way zod and arktype generally handle this is by treating the schema, rather than a type, as the source of truth. They then provide a way to define the type in terms of the schema:

  // zod 3 syntax
  import { z } from 'zod'

  const RGB = z.object({
    red: z.number(),
    green: z.number(),
    blue: z.number(),
  })
  type RGB = z.infer<typeof RGB>
  // same thing as:
  // type RGB = { red: number; green: number; blue: number };

For the initial migration, there are tools that can automatically convert types into the equivalent schema. A quick search turned up https://transform.tools/typescript-to-zod, but I've seen others too.

For what it's worth, I have come to prefer this deriving-types-from-parsers approach to the other way around.


With Zod you can build a schema that matches an existing type. TypeScript will complain if the schema you build does not match the type you are representing, which is helpful. From memory:

  import { z } from 'zod'

  type Message = { body: string }

  const messageSchema: z.ZodType<Message> = z.object({ body: z.string() })


I must further proselytize Amazon Ion, which solves most of the listed complaints and is criminally underused:

https://amazon-ion.github.io/ion-docs/

The "object and array need to be entirely and deeply parsed" and "object and array cannot be streamed when writing" complaints are somewhat incompatible from a (theoretical) parsing perspective, though; skipping past a value requires knowing its length up front, which a writer streaming that value can't know yet.

I agree that it is silly to design an efficiency-oriented format that does neither, though. Ion chooses efficient shallow parsing, although it also makes explicit affordances in the spec for streams of top-level values.


In this context, I think the prime advantage would be that instead of:

- Managing/setting up Ubuntu/$distro for the host
- Installing Docker Compose on the host
- Writing a docker-compose.yaml file to declare your container architecture
- (potentially) Writing a systemd service to bring docker-compose up with the host boot

you just:

- Manage/set up NixOS
- Add the container architecture definition to your NixOS config

The containers, being systemd units, would have all the normal systemd log management like the rest of your system, instead of having to dig through docker-compose logs with a different mechanism than "normal" systemd service logs.

You'd also get all the normal benefits of a NixOS system: the config file can be placed on a new system to completely reproduce the system state (modulo databases et al.), rollbacks are trivial, etc.
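For a rough sketch of what that container definition might look like using NixOS's oci-containers module (the container name, image, and paths here are invented):

```nix
{
  # Each container becomes a systemd unit with ordinary journald logs.
  virtualisation.oci-containers = {
    backend = "podman"; # or "docker"
    containers.web = {
      image = "nginx:1.25";
      ports = [ "8080:80" ];
      volumes = [ "/var/www:/usr/share/nginx/html:ro" ];
    };
  };
}
```
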


This hits the nail on the head. Well said.

I really like being able to manage the containers and the host machine in one configuration. It's a blessing from an Ops perspective.


For the TUI inclined, lazygit [1] and magit (emacs) [2] both have quick and intuitive ways of handling this. They're also both wonderful companions to the git CLI for day-to-day version control.

[1]: https://github.com/jesseduffield/lazygit

[2]: https://magit.vc/

