I've always thought a nifty thing about algebraic data types is the ability to deduce their size (in the set-theoretic sense, the number of possible elements) by algebra. Consider Bool, a fundamental primitive type with size 2. The product type Bool × Bool × Bool (a three-tuple) clearly has size 2×2×2=8, while the sum type Bool + Bool + Bool (a discriminated union) clearly has size 2+2+2=6. And this works with recursion too. The list type whose element type is A is defined as either Nil or an element paired with the remainder of the list: that's exactly L(A)=1+A*L(A), which solves directly to L(A)=1/(1-A). If you've studied generating functions, or even just Taylor series, you know this can be expanded into 1+A+A^2+A^3+…, which perfectly matches the intuition of a list having any number of elements. What if I want to define a special list that only allows an even number of elements? It would be L(A)=1+A*A*L(A), or 1/(1-A^2), which expands to 1+A^2+A^4+… Both of course are "infinite", but the algebra lets us distinguish precisely what kind of "infinity."
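As a Haskell sketch of those two list shapes (the type and constructor names here are mine, and EvenList is just one possible encoding):

    data List a = Nil | Cons a (List a)              -- L(A) = 1 + A * L(A)

    data EvenList a = ENil | ECons a a (EvenList a)  -- L(A) = 1 + A^2 * L(A)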
This also helps with figuring out isomorphisms, i.e. when two types are equivalent. So for example, if someone tries to represent an "Either Bool Bool" (2+2=4 cases) with three cases, we see immediately that at least one of the three cases must represent two of the original's values, i.e. we won't be able to losslessly round-trip.
The thing about isomorphisms between algebraic structures is that they require an invertible mapping between representations (what you mentioned) that also preserves relations (something you didn't mention). That distinction actually matters in programming, because you might be using something like polymorphism to operate on two representationally equivalent types in different ways.
If you have a base class CollidableObject that both PoolBall and GlobOfHoney extend, they could be representationally equivalent (both having two vector member fields, one for Position and one for Velocity) but with different implementations of CollidableObject::HandleCollision(CollidableObject& target) to reflect the fact that PoolBall can elastically collide with certain kinds of targets but GlobOfHoney only collides inelastically. So you'd not want to treat them as isomorphic.
I'm mostly just being pedantic, but one of my pet peeves is when programmers use terms like Isomorphism or Vector in a way that is partly but not entirely true to the mathematical definition. I think "bijection" is a better term for what you're talking about, because it's both necessary and sufficient for an information-preserving mapping between types.
This works with function types too. You can get a hint that objects with signature A -> (A -> A) have an isomorphism with objects of signature (A × A) -> A, by observing that the cardinalities are equal as algebraic expressions, ( |A|^|A| )^|A| = |A|^( |A|*|A| ). That one's just "currying" [0].
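A quick Haskell illustration, instantiating A at Int; curry and uncurry from the Prelude are the two halves of the isomorphism:

    plus :: (Int, Int) -> Int   -- signature (A × A) -> A
    plus (x, y) = x + y

    plus' :: Int -> Int -> Int  -- signature A -> (A -> A)
    plus' = curry plus

    plus'' :: (Int, Int) -> Int
    plus'' = uncurry plus'      -- round trip: behaves exactly like plus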
Yep, that’s one of the more mind-blowing "tricks" as it were. Maybe less so, but still nifty, is that
|T -> bool| = |bool|^|T| = 2^|T|
…which also happens to be the number of subsets of a set of size |T|. And indeed that’s what a predicate function is, a subset-generator; for every subset there is exactly one predicate that picks the members of that subset from the larger set.
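You can make this concrete for a tiny T. With T = Bool (so |T| = 2), there are exactly 2^2 = 4 predicates, one per subset; a small Haskell sketch:

    universe :: [Bool]
    universe = [False, True]

    -- every predicate carves out exactly one subset of the universe
    subsetOf :: (Bool -> Bool) -> [Bool]
    subsetOf p = filter p universe

    -- all 4 functions Bool -> Bool, built from their value tables
    allPredicates :: [Bool -> Bool]
    allPredicates = [\x -> if x then a else b | a <- universe, b <- universe]

    main :: IO ()
    main = mapM_ (print . subsetOf) allPredicates
    -- prints [], [False], [True], [False,True]: all four subsets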
Sure, any function T -> U partitions T into at most |U| subsets (the preimages of the elements of U). But the case where |U|=2 is special because 2^N is special in being the number of subsets of a set of size N.
Author here. It's funny you mention this. I was (am still) writing a post about it, and I went on a rant that while it's a (fun!) way to do "algebra with types", it's not actually the original meaning of it afaik, and then I went to look for a source for it and that grew into this whole blogpost.
Disclaimer 1: I have the same last name as Burstall's collaborator, Joseph Goguen. If we're related, it's VERY distant, so I truly can't add anything.
Disclaimer 2: I'm going to tie this back into algebras.
Setup 1: However, I have been a little obsessed with Burstall and Goguen lately and I've been SLOWLY working through some papers and trying to make sense of it. I'm just an F# programmer who likes languages, not an academic.
Setup 2: I've known about Burstall/Goguen for some time, but I never understood the gravity of their respective work until recently. And I have quite literally been annoying my tech friends and family with all my "Oh my god!" moments as I've been going through papers and festschrifts.
This whole time, I haven't been sure if I've been interpreting what I'm reading correctly, so your post has been wonderful in settling some of that.
I understand that some people may resist the idea of describing algebraic data types as analogous to the fundamental theorem of arithmetic for types. Burstall and Goguen were motivated by creating tools and systems built on solid foundations, such as universal algebras, to utilize analytical tools like logic systems and category theory for deeper insights.
However, I believe they were particularly focused on how algebraic data types model the cardinality of state spaces and transitions between states for this reason:
They were designing a specification system that helped working programmers design systems that accomplish two things (in today's parlance):
1. Make invalid states impossible.
2. Make invalid transitions between states impossible.
While learning Maude (OBJ's successor), I've been particularly fascinated with how its parent language lets you define very fine-grained domains.
And it makes sense, because if you're going to search through that domain to find counter-examples violating your theories, it makes sense to reduce the search space proactively.
But what really drove that for me (and maybe I'm projecting here), is how that seemed to correspond to a little program I wrote that constructed a bijection between the natural numbers and all closed lambda functions.
But what really convinces me that they had that in mind is how Maude and OBJ let you add attributes to constructors marking them as associative or commutative, or as having an identity element ("id"), where the impact of these attributes on the constructors actually reduces the state space.
Again, I could be projecting here, but I would love to hear your thoughts and I am excited to read your next post!
I learned a lot from this and your other comment here!
The way I connected the dots when I read about Clear (and OBJ) is that it let me explain free algebras by example, by just showing Clear code. That hopefully makes the concept more accessible than the dry mathy definitions that would have the eyes of most readers glaze over.
I must say that I'm not familiar with Burstall and Goguen's line of research beyond what I've written so far. In the future I'd especially like to know what lessons these specification languages can teach (or have already taught) about ML-style module systems, OO languages, and formalizations of mathematical structures in proof assistants (Clear reminds me of the little I've seen of Hierarchy Builder in Coq).
One concrete question I have is how Clear or OBJ or Maude code is actually meant to be run.
Your description of these languages reminds me of model checking tools like TLA+/Alloy. Does Maude target a similar niche?
I thought the way you explained free algebras was great. I am absolutely not an expert in Clear/OBJ (I'm very much still learning), so I personally appreciated your presentation of it.
To answer your question regarding how these languages are meant to be run:
1. You know more about Clear than I do, so I'm at a loss.
2. I've read through OBJ's documentation, but I've never used it. Goguen usually demonstrates it as a language for quickly prototyping mathematical objects, experimental programming languages, logical systems, theorem provers, etc. The manuals usually focus on applying reduction rules to verify models (he claims reduction is often the go-to method for proving properties), so I'm not entirely sure how much it parallels TLA+/Alloy yet.
3. Maude very much feels like OBJ with additional TLA+/Alloy-like features that allow you to check invariants through search. It's worth pointing out that Maude is very reflective, and that many features of Maude are written in Maude on top of a fairly small core based on rewriting logic. It's not unusual, while reading the documentation for an LTL checker or an SMT solver feature, to be shown the Maude code that defines that feature.
To answer your question: Maude does target a similar niche and has been used to verify concurrent applications, distributed systems, and protocols, but I really haven't explored that side of it yet. I've mostly been playing with Maude's functional modules and haven't really spent much time with Maude's object-oriented modules, which reflect the semantics needed to simulate and verify distributed systems.
I really don't like this view (but looks like I'm a small minority).
Types shouldn't have an algebra similar to numbers, they should have an algebra similar to sets. While for numbers A + A + A = 3A, on sets A ∪ A ∪ A = A.
Types are all about predicate satisfaction, and predicates don't change their value because they can be proven in more than one way. In practice, defining type sums as tagged sums leads to your algebra, and also leads to a lot of confusion between structure and denomination that would be completely avoided if sums and tags were independent operations.
IMO your view, if not common, is kinda intuitive from the point of view of someone trained in set theory/classical logic (i.e., most people before they find an interest in type theory; I myself was here a few years ago). If it feels like you're in a minority, it's because most others who might have the same opinion haven't reached the point where they could articulate it; many just give up way earlier on understanding type theory.
It certainly makes sense to a set theorist that union and intersection are more primitive than sums and products. From that perspective, type theory seems like a roundabout and unnecessary way of doing things.
The challenge is to imagine type theory as a radically different foundation for logic (because that's what it is), with sums and products as primitives instead, to name one superficial difference.
It's hard to explain in words. I personally didn't get it without years of first-hand experience with proof assistants based on type theory. Constructivism and category theory are other possible gateways into the right mindset.
There is one key difference which I think highlights the charm and simplicity of type theory without delving in technical details too much. The syntax of set theory is in two layers: there are sets, and there are propositions (about elements of those sets). In contrast, there is only one such "layer" in (dependent) type theory: types.
Types play the role of both sets and propositions if you want to encode set theory in type theory. The point is that types can be much more than either of those things, and that idea is inherently difficult to convey to someone whose only conception of logic is sets and propositions.
The "correct" (in the categorical sense) sum operation for sets is disjoint (tagged) union, not union. So A + A + A = A ∐ A ∐ A, which is isomorphic to 3*A = (3,A) (i.e. each element of A ∐ A ∐ A carries the information of a tag from 3 and an element of A). All is well.
You are not wrong, but you are indeed a minority. If you read research papers you will definitely find papers that use union in the way you describe (when it comes to subtyping), and then use a different notation for record extension (basically the discriminated-union kind of operator). It's essential to differentiate between them using different notations. With the former, for example, the Animal class and the Cat class are unioned into the Animal superclass; with the latter, you have an Animal object and a Cat object that remain separate.
It depends on what operations you pick. I like ML-style operations on types because they make set-theoretic sense.
C = A + B means that type C is a union of A and B, aka a "sum type", so its cardinality is the sum of the cardinalities of A and B. In TypeScript or Python terms, C = A | B, which I read as: an element of type C can be either an element of A or an element of B.
C = A * B means that C is a Cartesian product of A and B, and its cardinality is the product of those of A and B. In Python terms, C = tuple[A, B]. Hence something like Point = {x: number, y: number, z: number} is not "3 * number", but number^3, because it's a point in the R^3 space.
We reasonably assume that A ∩ B = ∅ in all practical cases, such as `int | None` or `string | number`, etc.
Yes, a Cartesian product is tagged (each axis is), but it does not affect the fact that len(A * B) = len(A) * len(B), hence it's logical to use multiplication to denote "record types".
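For what it's worth, this is easy to sanity-check in Haskell with two small finite types (Bool has 2 inhabitants, Ordering has 3):

    allBool :: [Bool]
    allBool = [minBound .. maxBound]               -- 2 values

    allOrd :: [Ordering]
    allOrd = [minBound .. maxBound]                -- 3 values: LT, EQ, GT

    sums :: [Either Bool Ordering]
    sums = map Left allBool ++ map Right allOrd    -- 2 + 3 = 5 values

    prods :: [(Bool, Ordering)]
    prods = [(b, o) | b <- allBool, o <- allOrd]   -- 2 * 3 = 6 values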
My answer would be: no, they are not the same type. Yes, they have the same number of bits representing them. They might have different RAM usage because of alignment.
Here's a thought I have had cached in my head from a long time ago. Ignore me if this is obvious or dumb; I don't know anything about type theory really.
There are basically two varieties of types: predicate types and enumerated types. Predicate types are functions that return true for objects that have their type. Enumerated types in some way list all the values the type can have. So "Boolean" is an enumerated type, but "even integer" is a predicate type (you test whether it's even; never mind that you could enumerate the even integers within your language, you wouldn't). (I'm pretty sure these varieties are dual in the category-theoretic sense of reversing arrows, but I don't know much about that either.)
So what happens is that the cardinalities of enumerated types obey arithmetic, but the cardinalities of predicate types do not. Predicate types only have a notion of "cardinality" anyway when you restrict the possible values to some finite universe. The intersection of two predicate types (x -> f(x) && g(x)) may be of the same cardinality as each of the predicates (if they are equal) or it might be the same cardinality as one of them (if f(x) implies g(x) then {x | f(x)} = {x | f(x) && g(x)}) or it might be some new value entirely.
All of the cases where algebra works on types are cases where they're being used as enumerated types. The usual definition of a list in functional languages (data [a] = [] | a : [a], or w/e) enumerates the possibilities, hence it obeys algebra. Were it described as a predicate, on the other hand, it would not (necessarily) obey algebra.
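In Haskell terms, the contrast might look like this (one definition lists its values, the other only tests them):

    data Color = Red | Green | Blue  -- enumerated: |Color| = 3, read off the definition

    isEven :: Integer -> Bool        -- predicate: its "members" are whatever passes the test
    isEven n = n `mod` 2 == 0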
When you have "predicate types" in practice you need restrictions on what kind of predicates are allowed. If you allow anything at all, that's when you get Russell's paradox. That tells us the general comprehension principle is invalid, that is, you cannot allow the construction of a set from an arbitrary predicate (or definite condition).
> Predicate types only have a notion of "cardinality" anyway when you restrict the possible values to some finite universe.
It is much more subtle than that. Regular set theory allows cardinalities far bigger than the least infinite cardinal, aleph-zero. But of course it's computers we are talking about. Even then it's wrong: what if I define a predicate that returns true for every natural number? Then it's infinite. But what if I define a complicated enough predicate that it does not halt for some numbers? Then we get back a recursively enumerable set that may or may not be infinite. In essence, once you introduce predicates you no longer really care about cardinalities; you start to care about the complexity of the predicates.
For your purpose, your predicate has to be sufficiently simple that it denotes a recursive set (not a recursively enumerable set).
As a predicate Russell's paradox translates to an infinite loop in programming, not an impossibility. And yes, you get in trouble if you define a set from an arbitrary predicate, but a type is not (necessarily) a set: a type can be a predicate without asking that it be explicitly cast as a set. You can certainly consider the "set of all lists", but you can't enumerate it, because you can have a list of any type and you don't know all the types that can be put into a list.
Anyway, I feel like you missed my point. If you are wondering why types seem to obey algebra but ALSO obey the laws of set theory, and why the two seem compatible, this is why: predicates obey the algebra of sets, while enumerated types have cardinalities which obey ordinary algebra.
I don't think a distinction between "predicate types" and "enumerated types" is particularly useful. As you pointed out, you can enumerate all even integers. All enumerated types can be represented as predicate types instead, with the predicate "does this belong to the enumerated set?". All countably infinite predicate types can be enumerated. So focusing on the difference seems like a red herring to me.
> All of the cases where algebra works on types is cases where they're being used as enumerated types.
No, algebra works perfectly fine on "predicate types" as well, because all the predicate types we care about (constructible / countably infinite) are enumerable too.
> Predicate types only have a notion of "cardinality" anyway when you restrict the possible values to some finite universe.
I think you've misunderstood me. If the predicate types are enumerable then they're not predicate types, they're enumerated types. And if the enumeration turns out to be infinite then you get the algebra of infinite cardinals, not the algebra of numbers, which looks very different. But it still explains why they obey algebra.
I don't understand you. Everything you can do on a computer is enumerable. So every type is enumerable. Then predicate types don't exist.
Infinite cardinals are not very useful when it comes to computers because there's only one infinite cardinal that's relevant: aleph-zero. All other infinite cardinals are beyond what computers can handle.
I'm not talking about the notion of enumerable sets. I'm talking about whether a type explicitly enumerates its members (like A = B | C or A = [] | [A]) or whether it is a contract which does not know how many members it has (like a class Foo, or a predicate like "lists of Foos" or just "even integers"). You can do algebra on the former, including sometimes divergent-series-style algebra like writing 1/(1-X) = 1 + X + X^2 + ...; you can't do algebra on the latter.
It doesn't matter that you could in principle enumerate, say, the even integers, either finitely (since there are finitely many values in your chosen language) or infinitely (in the sense of having cardinality aleph-zero). It matters whether the type definition does that. If the definition of an even integer were `Even = [] | S^2(Even)`, with S as the successor function, then it would fall in the first category and some algebra on the type would work (although it would be silly to do that).
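In Haskell, that definition might read (names mine):

    data Even = Zero | SuccSucc Even  -- E = 1 + E, each step adding two

    toInt :: Even -> Integer
    toInt Zero         = 0
    toInt (SuccSucc e) = 2 + toInt e  -- e.g. SuccSucc (SuccSucc Zero) is 4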
> I'm talking about whether a type explicit enumerates its members (like A = B | C or A = [] | [A]) or whether it is a contract which does not know how many members it has
And I'm asserting that this is an implementation detail that doesn't matter to type theory and would be unnecessary and distracting to think about.
> you can't do algebra on the latter.
Yes, you can. You can take sum types and cross products of the latter. You can use them as "type variables" in "type algebraic equations". That's what "doing algebra" on types means, whether or not you can count how many members are in the result.
> It doesn't matter that you could in principle enumerate the [type] ... It matters whether the type definition does that.
This is the root of our disagreement. You should be able to replace the definition (implementation) of a type, e.g. a list of the even integers between 0 and 100, with an equivalent definition (implementation), e.g. "all integers x such that x is even and x is between 0 and 100", and your type theory must not care. If your type theory cares about the exact implementation details of your type, it's hard to even call it a theory at all, because it's not working at a consistent level of abstraction. "Replace implementation with equivalent implementation" must always be invisible to everything operating in higher abstraction levels. From the point of view of any reasonable type theory, enumerated types and predicate types are interchangeable.
Question: take type X to be the cross product of some predicate type and some enumerated type. Is X enumerated or predicate? You've claimed they can't overlap. Which is it? A type theory that frets over this distinction is a bad type theory.
Imagine trying to categorize all your functions as either "bit-fiddling": those that use bit manipulation to calculate, and "arithmetic": those that use + - * / to calculate. Who cares? You can convert one to the other. You can do both at once. It's an implementation detail of your function and you should not be worrying about it at the functional abstraction level.
> take type X to be the cross product of some predicate type and some enumerated type. Is X enumerated or predicate? You've claimed they can't overlap.
A type like X = enum | predicate is enumerated, since it has two options that it explicitly lists, one of which happens to be a predicate.
> Yes, you can. You can take sum types and cross products of the latter.
You are talking about a different kind of algebra than I am, which should be pretty obvious from the context of this conversation. This was all in reply to somebody who was put off by the fact that you can do literal actual algebra on types, like L(A) = 1 + A * L(A), and then rearrange it into an infinite sum, and who was saying that types ought to obey something like set algebra instead, wherein A ∪ A = A. I am saying that the sense in which the former algebra works is due to the existence of types which enumerate their variants and therefore act like numbers (or, like, decategorify to numbers, or w/e).
Anyway, I understand that I am not well-versed in type-theory, but your thickheaded refusal to see my actual point and instead dismiss me with your confused counterpoints is just rude and unnecessary. Although I might be saying something in slightly incorrect terminology or missing something subtle, I'm not missing the non-subtle things you're saying and it should really not be hard to see the sense of what I'm getting at. So... cut it out, please.
> Enumerated types in some way list all the values the type can have
And then said:
> X = enum | predicate is enumerated, since it has two options that it explicitly lists
But you can't list its values by your own definition. You defined types as being either A or B, and accepting that definition, you can come up with types that are neither. This is my point: your model is inconsistent. Your categorizations are not useful. You should not use that model. That is not a good way of thinking about the world.
> your thickheaded refusal to see my actual point and instead dismiss me with your confused counterpoints is just rude and unnecessary.
I honestly see myself as attempting to help you here. I see you making a mistake, and I'm trying to convince you, and to a lesser extent any other readers, not to make that mistake.
This has nothing to do with you using "wrong terminology" or not understanding type theory; I've accepted your definitions, and shown that they are inconsistent with themselves. Your math doesn't work. Your model is not helpful. The fact that you cannot define tuples of (enum, predicate) as either enum or predicate by your own definition shows that your model fails. Don't use it. Move on. Dismissing you is not rude nor unnecessary. It is highly necessary that we dismiss each others' ideas when those ideas don't work.
I get that that sounds harsh. It's great to come up with interesting ideas and categorizations. It's even better to throw them away when you realize they don't work. Do that seven thousand more times. Six thousand, nine hundred and eighty seven of them will be thrown away. The 13 remaining ones will be worth it. But you'll never get there if you cling to ones that are provably wrong. This one is. From my point of view, you are the one being thickheaded here, by clinging to this one idea as if it's the only one you'll ever have. It's not. I strongly urge you to get better at killing your own ideas to make room for new ones. Accept proofs that your ideas are wrong as improvements in your knowledge.
> From my point of view, you are the one being thickheaded here, by clinging to this one idea as if it's the only one you'll ever have.
It's just that you and I haven't talked about my idea yet, because you're talking about something else, then failing to realize it and thinking I'm ignoring your rebuttals because I'm stubborn rather than because they're irrelevant to my point.
Anyway I will try one more time for posterity:
> But you can't list its values by your own definition.
You have misunderstood me. The X I have described enumerates its values in the sense I am talking about: it's one or the other, type A (an enum) or B (a predicate). I am talking about it being enumerated _in terms of other things_. If for some reason you had X = A | B as a definition, and you then knew the cardinalities of A and B, you would be able to say |X| = |A| + |B| (or |A| + |B| - |A ∩ B| if they overlap). This is the sense in which algebra works. That is just... obviously true, right?
The interesting parts show up if you have recursive definitions (in which case you get these infinite series) or if some of the sets have cardinalities that you don't know how to use for algebra (like a predicate over an un-enumerable set of values...) ... but in some cases algebra still works, and I don't know the details about that.
In the case of "predicate" types, it is much harder to see how arithmetic would work on the cardinalities, because I could write down something like Prime = x: integer & isPrime(x), and regardless of how you think about |Integer| in your language/setting/etc., you have a complex predicate over integers that's not going to correspond to any obvious algebra. And yet if I wrote Q = Prime | true | false, you could still say that |Q| = |Prime| + 2, and that works fine. On the other hand, if I wrote Q = Prime | Fibonacci, then good luck, because Prime ∩ Fibonacci is hard to do anything with.
I tried to figure out the history of algebraic data types for a little bibliographic section in my book (Book is https://scalawithcats.com/ Chapter 2 is algebraic data types. My history is much less thorough than the linked post!) I'm glad I seemed to have got it correct.
The evolution of programming language features is quite interesting. The first presentations of a concept often are very rough as people struggle to present an idea in terms of what is currently known. Even terminology can change a lot. E.g. generalized algebraic data types started out as, variously, indexed types, first-class phantom types, and more, before the current terminology was adopted. (Adoption by a reasonably widely used language, in this case Haskell, helps to fix terminology.) I wouldn't be surprised if the term indexed types comes back into vogue if future languages start taking codata more seriously.
The article takes pains to distinguish "free algebra" from "sums and products", but a "free algebra" is a ring, and a ring is an abstract representation of sums and products. Yes, for the mathematician there are finer distinctions, but after a first pass of data compression for the lay audience, they are the same thing.
When your workaday programmer understands that OR and AND are sums and products, and that concatenation is a non-commutative product (sets are commutative, but lists are not), they are halfway to free algebra in one day, without a master's degree in mathematics. The written language of free algebra usually uses the + and • symbols.
"Free algebra" here comes from universal algebra/category theory, which is quite distinct from the "free algebra" of ring theory (having nothing in particular to do with sums and products). Unfortunately "algebra" might be the most overloaded term in mathematics.
(The article says in the second paragraph: "the name comes from universal algebra". The wiki article you linked disambiguates itself with universal algebra right at the top.)
"Algebraic data type" in this sense is synonymous with "inductive data type".
The values of an algebraic data type do not form a free ring or any ring. You cannot generally add or multiply them. You may write a particular type with constructors intended to serve this purpose, but the laws defining a ring will not hold.
A separate fact is that types (not their values) belong to a free semiring "up to isomorphism" (where sum and product are the categorical coproduct and product). It is this fact that seems to have been conflated with the "algebra" of "algebraic data types". The disambiguation of this with the sense from universal algebra is the very purpose of the article.
In Rod Burstall's 1980 paper titled, "ELECTRONIC CATEGORY THEORY", he explores "using algebraic and categorical ideas to write programs", and explains how his interest in "using algebraic ideas in programming goes back to work with Peter Landin, who first interested me in universal algebra and category theory" in the '60s.
Also, you have to love this introduction:
> If we can have electronic music, why not electronic category theory? 'Music has charms to soothe the savage breast', said Congreve. Has category theory less charm? Can we not make the electrons dance to a categorical tune?
You have to love his whimsy.
I've been playing with Maude (based on OBJ and Clear) lately, and it really seems like algebraic data types have been driven by categorical algebras / universal algebras from the beginning. At some point it clicked how it felt like I was defining categories (modules), objects (sorts), and morphisms (ops), and then adding constraints by writing equations.
I don't know Maude or category theory well enough to say it qualifies as an "electronic category theory" for some defined set of categories (small?), but I can see that vision, and how it's a little disappointing this vision isn't better understood.
FYI, Burstall sought out Jim Thatcher from the ADJ group, and Thatcher, who was focused on stopping the Vietnam war, referred Burstall to Goguen, the ADJ group's practicing logician / category theorist. From what I read, the reason Goguen was at the ADJ group in the first place is that Mac Lane recommended him for the position while Goguen was studying under him in Chicago.
Consider the timeline: Burstall and Goguen first met in 1977. They figured out the semantics for this language very quickly, defined institutional model theory (which formalized a minimal answer to "what is a logic?"), and then used that as the basis for creating Clear, which inspired Burstall's Hope and Goguen's OBJ.
The fact that they did all that in such short order is very telling (IMO) of how long they had each been concocting schemes to get universal algebras and category theory into computer science.
The issue is that boolean functions then do not form a polynomial ring with this "sum", because unlike XOR, OR is not a group operation: it doesn't have a well-defined inverse.
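A minimal Haskell check of that claim: xor-ing by y twice recovers x (every element is its own inverse), while OR-ing with True destroys information:

    xor :: Bool -> Bool -> Bool
    xor = (/=)

    main :: IO ()
    main = do
      -- (x `xor` y) `xor` y == x for all x, y: xor is invertible
      print (and [(x `xor` y) `xor` y == x | x <- [False, True], y <- [False, True]])
      -- but (|| True) maps both False and True to True: no inverse can exist
      print (False || True, True || True)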
This is an important distinction between whether we are preserving structure on rings or on a group. An example would be negation, mapping positive to negative numbers: a group-level transformation that is invertible but not structure-preserving for order.
Tangential reference to Robin Milner .. a couple of big contributions to formal logic in computing, a Turing Award, FRS (Fellow of the Royal Society), FRSE (Fellow of the Royal Society of Edinburgh), etc. Four doctoral students .. and this gem:
15 or 20 years ago there was an online video of Milner giving a talk at Inria, about building the "tower of digital computing" (I recall this expression), but I never managed to find it again.
The article begins with how ADTs existed in the 80s, before they were called that, and the ADT Wikipedia article also mentions the language Hope as an origin in the 80s.
Record and Variant types were definitely in Pascal and likely already in ALGOL in the 60s.
Why wouldn't these qualify as ADTs by another name equally?
I didn't recall anything like ADTs in Pascal, but I last used it a very long time ago, so I dug into it. A "record with variant part"[1] seems to be equivalent in expressive power to ADTs. However, Pascal lacks the other part of the equation: the guarantee, when taking apart such a type, that you will handle all cases, a.k.a. exhaustivity checking. In general, I think too much material focuses on the data-defining part of algebraic data types, and not the data-using part, known as structural recursion, which is arguably more useful. (You tend to use data more than you define it.)
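For example, in Haskell, GHC with -Wall (which enables -Wincomplete-patterns) checks exactly this when you take data apart:

    data Shape = Circle Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h
    -- delete either equation and GHC warns about non-exhaustive
    -- patterns: the "data using" guarantee that Pascal lacks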
It's also interesting that more "modern" languages have forgotten these lessons. E.g. Python and Go have nothing like algebraic data types.
I think that's more like an any type, as you can assign any value to it. A sum type can only take on values that are defined at the time the sum type is defined.
Algol 60 did not have record or structure types. Record types did not really become a thing until the mid-1960s, around the time of CPL (1963) and Algol W (1966).
ALGOL 68 did have structs and unions, however, which might be what they meant by "ALGOL in the 60s." That doesn't contradict what you say, though, since 1968 is already in the late 1960s.
I'm not a type theorist and the article isn't super clear IMO, but it sounds like they definitely would. The "algebraic data type" name was obsolete pretty much immediately, in the same way that nobody talks about horseless carriages now. It exists just to confuse people and is probably best avoided.
For me, algebraic data types are not only about joining but also breaking apart (cleaving and cleaving, respectively), so the lack of pattern matching on constructors would be a negative, causing me to say:
(especially as one of the sibling comments was making a big deal about rings)
I’m pretty sure Miranda did not allow the user to directly define term rewriting rules for algebraic data types. Obviously this is from decades ago so my memory could be faulty but I used it a lot for a year or two and do not remember that feature.
Thanks! Here we go:
"Note: In versions of Miranda before release two (1989) it was possible to associate "laws" with the constructors of an algebraic type, which are applied whenever an object of the type is built. For details see Turner 1985, Thompson 1986. This feature was little used and has since been removed from the language. "
fwiw I unwittingly retrod this path; looking back at early versions of my algebraic type library there were all sorts of provisions for automatic smart constructors (commutativity, identities, etc., surely inspired by Goguen) that got dropped from later versions for lack of use.
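To give the flavor, here's a Haskell sketch of one such smart constructor (UPair and mkUPair are made-up names): a "commutative" pair whose constructor normalizes argument order, so mkUPair x y == mkUPair y x holds by construction:

    newtype UPair = UPair (Int, Int) deriving (Eq, Show)

    mkUPair :: Int -> Int -> UPair  -- hypothetical smart constructor
    mkUPair x y
      | x <= y    = UPair (x, y)
      | otherwise = UPair (y, x)    -- sort on the way in: commutativity for free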
Oh no, you talked about Miranda without saying in a footnote that "Miranda™ is a trademark of Research Software Limited"! Now you will get a stern letter from David Turner!
I don't know. But I've read quite a number of papers on type systems, type inferences and functional languages in general from late 80s and early 90s, and every time they mention Miranda, there is this footnote that says that "Miranda is a trademark of Research Software Limited", e.g. [0][1][2], so there has to be a reason why they did this. Sure, Turner himself did that as well, see "History of Haskell" [3], "3.8. Haskell and Miranda":
As required to safeguard his trademark, Turner always footnoted the first occurrence of Miranda in his papers to state it was a trademark of Research Software Limited. In response, some early Haskell presentations included a footnote "Haskell is not a trademark".
but that doesn't really explain why others did... so the hypothesis that maybe he did send people who didn't attribute Miranda's trademark a letter of complaint seems somewhat amusing, you know? A small in-joke for those people who brushed against the FP research of the early 90s.