Its contents are what I consider the meta-game of programming. Understanding types seems to be a real boost to thinking about architecture, implementation, etc.
It’s also surprisingly approachable, unlike most serious mathematical-logic(-adjacent) books. The three-page elementary treatment of the Knaster–Tarski fixed point theorem is worth reading even if you’re interested in cases for which it’s not sufficient (seriously infinite or order-theoretic things).
My 2c: I think the point is that types themselves are just syntax. It's all bits at the lowest level, so there's really no such thing as a 'type' on a Turing Machine tape. A type can be expressed syntactically on the tape, but that's exactly the point.
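One way to see "it's all bits" concretely: the very same bit pattern means different things depending on the type you read it at. A small sketch in Python using the standard `struct` module (the specific values here are just illustrative):

```python
import struct

# Pack the float 1.0 into its raw 32-bit IEEE-754 representation,
# then reinterpret the identical bytes as an unsigned 32-bit integer.
bits = struct.pack("<f", 1.0)
as_int = struct.unpack("<I", bits)[0]

print(bits.hex())  # 0000803f  (little-endian 0x3F800000)
print(as_int)      # 1065353216
```

Nothing in the bytes themselves says "float" or "int"; the type lives in how the program's text instructs us to interpret them.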
Pierce is specifically excluding dynamic typing from his definition. He goes on to address this point in the introduction to the book:
> The word "static" is sometimes added explicitly—we speak of a "statically typed programming language," for example—to distinguish the sorts of compile-time analyses we are considering here from the dynamic or latent typing found in languages such as Scheme, where run-time type tags are used to distinguish different kinds of structures in the heap. Terms like "dynamically typed" are arguably misnomers and should probably be replaced by "dynamically checked," but the usage is standard.
A static type system is syntactic insofar as the syntax itself is what is being analyzed, not the actual runtime behavior.
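To make the distinction concrete, here is a tiny Python sketch of the "dynamically checked" behavior Pierce describes: the source is syntactically well-formed, so nothing is rejected up front, and the mismatch only surfaces when the interpreter consults the run-time type tags (the function name `add` is just illustrative):

```python
def add(a, b):
    return a + b

# Syntactically, this call is fine, so it runs; the error appears
# only at run time, when + checks the operands' type tags.
try:
    add(1, "two")
except TypeError as e:
    print(e)
```

A static type checker working over the same syntax would reject the call `add(1, "two")` before the program ever runs, without executing anything.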