> it's not the kind of surface level "aesthetically beautiful" readability that tickles the mind of an abstract thinker
Rather, the sort of beauty it's going for here is exactly the type of beauty that requires a bit of abstraction to appreciate: it's not that the concrete syntax is visually beautiful per se so much as that it's elegantly exposing the abstract syntax, which is inherently more regular and unambiguous than the concrete syntax. It's the same reason S-exprs won over M-exprs: consistently good often wins over special-case great, because the latter imposes the mental burden of trying to fit into the special case, while the former allows you to forget that the problem ever existed. To see a language do the opposite of this, look at C++: the syntax has been designed with many, many special cases that make specific constructs nicer to write, but the cost is that now you have to remember all of them (and, if you're writing templates, account for all of them; hence the 'new' uniform initialization syntax[1]).
This trade-off happens all the time in language design: you're looking for a language that makes all the special cases nice _as a consequence of_ the general case, because _just_ being simple and consistent leads you to the Turing tarpit: you simplify the language by pushing all the complexity onto the programmer.
[1]: https://xkcd.com/927/
I considered making the case for the parallels to Lisp, but it's not an easy case to make. Zig is profoundly not a Lisp. However, in my opinion it embodies a lot of the spirit of it: a singular syntax for programming and metaprogramming, built around an internally consistent mental model.
I don't really know how else to put it, but it's vaguely like a C-derived spiritual cousin of Lisp, with structs instead of lists.
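To make the "singular syntax" point concrete, here's a minimal sketch in Zig (my own illustration; `Pair` and `max` are names invented for the example): the compile-time "metaprogram" that builds a generic type is an ordinary function, written in the same syntax as the runtime code that uses it.

```zig
const std = @import("std");

// The "metaprogram" is plain Zig: a function that takes a type and
// returns a new struct type, evaluated at compile time.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,
    };
}

// A generic function: the type is just another (comptime) parameter.
fn max(comptime T: type, a: T, b: T) T {
    return if (a > b) a else b;
}

pub fn main() void {
    const p = Pair(u32){ .first = 3, .second = 7 };
    std.debug.print("{}\n", .{max(u32, p.first, p.second)});
}
```

There's no separate template or macro sublanguage here; the same if/return/struct constructs serve both levels, which is the Lisp-ish quality being described.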
I think that, because of the forces I talked about above, we see a recurring progression in programming languages:
- we have a language with a particular philosophy of development
- we discover that some concept A is awkward to express in the language
- we add a special case to the language to make it nicer
- someone eventually invents a new base language that natively handles concept A nicely as part of its general model
Lisp in some sense skipped a couple of those steps: it was a very regular language that didn't necessarily have a story for things people at the time cared about (like static memory management, in the guise of latency). But it's still a paragon of consistency in a usable high-level language.
I agree that it's of course not correct to say that Zig is a descendant or modern equivalent of Lisp. It's more that the virtue Lisp embodies above all else is a universal goal of language design, just one that has to be traded off against other things, and Zig has managed to do pretty well at it.
> I don't really know how else to put it, but it's vaguely like a C-derived spiritual cousin of Lisp, with structs instead of lists.
Zig comptime operates a lot like very old-school Lisp FEXPRs, before the Lisp intelligentsia booted them out because FEXPRs were theoretically messy and hard to compile.
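The analogy is loose, since comptime arguments are evaluated (just at compile time) rather than handed over unevaluated the way a FEXPR's were, but a small sketch gives the flavor (the function name is invented for the example):

```zig
const std = @import("std");

// Because `n` is a comptime parameter, the argument expression at the
// call site is evaluated by the compiler, and this loop is unrolled
// into straight-line code before the program ever runs.
fn unrolledGreet(comptime n: usize) void {
    comptime var i: usize = 0;
    inline while (i < n) : (i += 1) {
        std.debug.print("hello {}\n", .{i});
    }
}

pub fn main() void {
    // 2 + 1 is folded by the compiler; passing a runtime-only value here
    // would be a compile error rather than a runtime failure.
    unrolledGreet(2 + 1);
}
```

It's not the same mechanism, but it has the same effect of ordinary-looking calls doing macro-like work.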
As someone who loves Lisps, I still have to disagree on the value of the s-expression syntax. I think that sexps are very beautiful, easy to parse, and easy to remember, but overall they're less useful than Algol-like syntaxes (a family to which I consider most modern languages, including C++, to belong), for one reason:
Visually heterogeneous syntaxes, for all of their flaws, are easier to read, because it's easier for the human brain to pattern-match on distinct features than on indistinct ones.