I was referring to Elixir's blog post on static types where the creator of Elixir José Valim wrote: "The Dialyzer project, implemented in Erlang and available for Elixir projects, chose to have no false positives. However, that implies certain bugs may not be caught. At this point in time, it seems the overall community would prefer a system that flags more potential bugs, even if it means more false positives." [1]
In my experience with TypeScript, you really want your type checker to be eager to complain about type issues, which is why so many people turn on strict mode. In fact, occasionally I'll have to suppress a false-positive error with a @ts-expect-error comment.
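For example, here's a contrived sketch of the kind of suppression I mean (the `formatId` helper and its overly narrow types are hypothetical, just for illustration):

    // tsconfig.json has "strict": true, so the checker complains aggressively.

    // Imagine a third-party helper whose published types are too narrow:
    // it claims to accept only strings, but at runtime it handles numbers fine.
    declare function formatId(id: string): string;

    function printOrder(orderId: number): void {
      // The checker flags this call as an error. Rather than loosening the whole
      // config, I suppress this one diagnostic and move on.
      // @ts-expect-error -- formatId's types are narrower than its runtime behavior
      console.log(formatId(orderId));
    }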
But even assuming Dialyzer is as good as TypeScript, have you found that the libraries you use actually have quality type definitions? The few times I've used a JS library without type definitions it ended up breaking type inference, and infecting the codebase with the any type, which renders the type checker basically useless. How has your experience been in regards to that with Elixir?
> have you found that the libraries you use actually have quality type definitions?
I have some experience with Typescript and it was way better than vanilla JavaScript. Typescript mappings were available for most libraries, though for a couple they were not up to date. Similarly for Elixir, the coverage for nearly all APIs is pretty good but also has some holes. In my experience it hasn’t been a big deal either. Any serious library developer puts types alongside their function signatures/implementation.
> The few times I've used a JS library without type definitions it ended up breaking type inference, and infecting the codebase with the any type, which renders the type checker basically useless. How has your experience been in regards to that with Elixir?
Because Elixir is a functional language, Dialyzer can easily make inferences on untyped code based on ‘success typing’ (what José talks about), though it can make for some very obtuse and verbose warning messages. This inference is in addition to the safety provided by typespecs, which every Elixir library developer typically intermingles with their API surface.
If I were to level criticisms at Dialyzer, they would be its sometimes difficult-to-read warnings, its speed (despite being multithreaded), and the race conditions in the VS Code plugin (which is looking for extra maintainers – if I had time I would help). For the weird warnings I find I just need to add extra typing to my code to narrow it down, which I should be doing anyway.
> If I were to level criticisms at Dialyzer, they would be its sometimes difficult-to-read warnings, its speed (despite being multithreaded), and the race conditions in the VS Code plugin (which is looking for extra maintainers – if I had time I would help).
One of the advantages of TypeScript is that VSCode is written in TypeScript, and both VSCode and TypeScript are developed by the same company, so there's a really nice synergy there. I imagine Kotlin users feel the same way using JetBrains products, and Swift users feel the same way about Xcode.
Dialyzer looks interesting, but I can't imagine giving up on the expressiveness of TypeScript. Some of the things you can do with generics, mapped types, intersection types, template literal types, conditional types, and utility types are almost mind-boggling. It's difficult to reap all of the benefits of static analysis without some of these advanced type operators. The type manipulation section of the TS manual is really underrated.
Someone for example wrote an SQL parser in TypeScript that requires no runtime code [1]. It can infer the types of an SQL query's result based on an SQL string without any runtime code execution. There was a similar project where someone built a JSON parser entirely using the type system [2]. There's also an ongoing discussion on GitHub about the fact that TypeScript's type system appears to be a Turing-complete language with some other cool examples [3]. My point is that the type system is incredibly expressive. You rarely run into an idiom that can't be typed effectively.
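To give a tiny, self-contained taste of the technique those projects rely on (this is my own toy sketch, not code from the linked repos):

    // Extract the selected column name from a very restricted query shape,
    // entirely at the type level, using a template literal type plus `infer`.
    type SelectedColumn<Q extends string> =
      Q extends `SELECT ${infer Column} FROM ${string}` ? Column : never;

    type A = SelectedColumn<"SELECT name FROM users">; // "name"
    type B = SelectedColumn<"DELETE FROM users">;      // never

    // No runtime code runs here; the "parsing" happens inside the checker.

The real projects scale this idea up to full grammars, but the mechanism is the same.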
The SQL stuff sounds cool, but Ecto is so expressive I don’t even need to resort to raw SQL like I do in other languages. Elixir typing can handle a subset of intersection types, which is rather niche, and of course handles union types, which are more prevalent. Generics and inheritance are replaced by a more permissive style of polymorphism via Elixir behaviours, which are also part of the typing system.
All languages have their own flavor, and their own pros and cons, and if the advanced types of Typescript work for you, then great! In my own experience, I have found Elixir and its typing capabilities to work well for me.
> The SQL stuff sounds cool, but Ecto is so expressive I don’t even need to resort to raw SQL like I do in other languages.
The SQL example I linked to isn't something you'd use to interact with a database in production; for that you'd probably reach for an ORM like Prisma. I was just trying to demonstrate the level of type inference you can achieve with TS. Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
> Elixir typing can handle a subset of intersection types, which is rather niche
I personally use intersection types quite a bit. If union types are like the logical or operator, then intersection types are like the logical and operator. Being able to define a type that combines one type and another is not a niche workflow for me.
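A small, made-up illustration of what I mean (the type names are mine, purely for the example):

    // Two independent capabilities...
    type Timestamped = { createdAt: Date };
    type Identified = { id: string };

    // ...combined with an intersection: a value must satisfy both.
    type Entity = Timestamped & Identified;
    const row: Entity = { id: "abc-123", createdAt: new Date() };

    // A union, by contrast, only requires one side or the other.
    type EntityLike = Timestamped | Identified;
    const partial: EntityLike = { id: "abc-123" }; // fine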
> In my own experience, I have found Elixir and its typing capabilities to work well for me.
Can't argue with that! Everyone has their preferences.
> Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
Going from a JSON string to a type for it is actually one of the easier examples of inference I can imagine. JSON is a data description format in which all the base types are syntactically distinguishable; it has no variables, no arrows (i.e. functions), no generics. When it comes to type inference, you can't get a much easier example.
SQL is more complex, indeed, but still doesn't seem too crazy if you have access to table schemas. It's also a matter of whether triggers and stored procedures are taken into account, but I assume they're not.
There's a lot of prior art described in literature as well as practical programming implementations with much crazier, yet successfully working type inference.
I just want to make sure we're on the same page here. The JSON example I linked to isn't inferring the types of JSON that's already been parsed and deserialized, that would be trivially easy in any language (including TS). If I have an object that's been parsed from JSON, I can just use the typeof operator in TypeScript to infer the type of that object.
The example I linked to is taking a serialized JSON string, and parsing the literal characters in the string (characters like double quotes, commas, whitespace, etc) into a type, purely using type annotations. And the structure of that JSON can be of arbitrary nested depth.
All of this is accomplished using template literal types which allow you to make assertions about the contents of a string. In TypeScript you can assert more than just "this value should be a string". You can make detailed assertions about the structure of strings, and that's what allows these parser demos to exist.
When you combine these template literal types with recursive types, conditional types, and TypeScript's infer keyword you can do some pretty interesting type level programming.
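As a toy illustration of how those pieces combine (my own sketch, far simpler than the linked JSON parser):

    // Split a comma-separated string into a tuple of its parts, recursively,
    // using only type-level constructs: a template literal pattern, `infer`,
    // a conditional type, and recursion.
    type Split<S extends string> =
      S extends `${infer Head},${infer Rest}` ? [Head, ...Split<Rest>] : [S];

    type Fields = Split<"id,name,email">; // ["id", "name", "email"]

    // The linked JSON example pushes the same idea much further, walking an
    // entire JSON string literal and building up a structured object type.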
Just to further demonstrate the point, there's an interpreter for the BF programming language, written entirely using TypeScript type annotations [1].
> There's a lot of prior art described in literature as well as practical programming implementations with much crazier, yet successfully working type inference.
I’m not the GP, but Elixir can already accomplish compile time text processing with metaprogramming (like it does for inline html templating for ‘heex’ functions and files), and it’s not a huge stretch for it to be able to call a JSON parser on any string knowable at compile time and convert it into a type. That it hasn’t been done yet is probably because no one has deemed it worthwhile to implement. It does sound cool, though.
Metaprogramming is also why generics aren’t really needed in Elixir or Erlang. All specializations boil down to their reified types using macros.
> I’m not the GP, but Elixir can already accomplish compile time text processing with metaprogramming (like it does for inline html templating for ‘heex’ functions and files)
It's not the compile time text processing that's interesting, it's the fact that this compile time code can yield incredibly detailed types, and the fact that it's all done purely using type annotations. Almost every language that's used to build websites has some sort of server side templating language where you can sprinkle some code into your html templates.
> That it hasn’t been done yet is probably because no one has deemed it worthwhile to implement.
Maybe, or maybe it's less feasible without an advanced type system. The final version of the JSON example we're talking about ended up being 61 lines of code. Without some concrete Elixir code to inspect, this conversation is becoming very abstract.
Honestly even if some of this were possible with metaprogramming, my intuition is that it would be much more verbose and complex. I feel like if I tried to implement all of the features of TypeScript by mucking around with an abstract syntax tree using metaprogramming, I would end up with a difficult to maintain crude approximation of the real thing. I don't think we're giving compiler developers enough credit by saying everything they've worked on can be replaced with a quick macro. José Valim's blog post on static typing ended with an announcement that they've sponsored a PhD student to work on these problems.
After two days of discussion, I think we're reaching a bit of an impasse. Honestly just use what works for you!
> It's not the compile time text processing that's interesting
You've talked right past me. Metaprogramming is not compile time text processing.
> Honestly even if some of this were possible with metaprogramming, my intuition is that it would be much more verbose and complex.
No. It's probably even how the Microsoft team are achieving the typing you're talking about. The special sauce here isn't as special as you think.
> I think we're reaching a bit of an impasse
Not really. I was just saying that what you're talking about is totally possible with metaprogramming. Not making some rhetorical play.
> José Valim's blog post on static typing ended with an announcement that they've sponsored a PhD student to work on these problems.
That's specifically because strict static typing with guards^ is at least difficult, but maybe impossible, and that's what the PhDs are trying to figure out. However, dialyzer isn't as hobbled as you imagine/purport it to be – the crux of my counterargument – and actual experience trumps speculation or casual reading on the topic.
FYI, guards and pattern matching are one of the reasons why Elixir is so damn expressive, and this kind of function typing isn't available/possible in most non-functional languages
> You've talked right past me. Metaprogramming is not compile time text processing.
I'm not talking past you. I never said metaprogramming is compile time text processing. You said "Elixir can already accomplish compile time text processing with metaprogramming", and I was just pointing out that the text processing itself is not the most interesting part of the example, it's the resultant types.
> No. It's probably even how the Microsoft team are achieving the typing you're talking about. The special sauce here isn't as special as you think.
But I don't have to reimplement any of this, because Microsoft has already written it. With enough time maybe you could implement dependent typing using metaprogramming for example, but would you then say that Elixir is just as good at dependent typing as Idris, which has that feature built in?
> I was just saying that what you're talking about is totally possible with metaprogramming.
You've graduated from saying "I think someone more savvy with Elixir would know more." to "totally possible"[1]. This does not sound like an argument from experience.
> That's specifically because strict static typing with guards^ is at least difficult, but maybe impossible, and that's what the PhDs are trying to figure out.
A lot of what you're talking about has already been implemented in other languages. This is not untrodden ground. There's no syntactic sugar for guards in JS (which TS just adds type annotations to), but semantically guards are very similar to type narrowing in TS [2], which I use daily. This is not something I've read about casually on the internet.
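For reference, a minimal example of the narrowing I mean (contrived, but it's the everyday pattern):

    // The checker narrows the union type of `value` based on runtime checks,
    // much like guards dispatch on the shape of the data.
    function describe(value: string | number | null): string {
      if (value === null) {
        return "nothing";           // value is narrowed to null here
      }
      if (typeof value === "number") {
        return value.toFixed(2);    // narrowed to number
      }
      return value.toUpperCase();   // only string remains
    }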
> This is not something I've read about casually on the internet
I was speaking specifically about your opinions on Dialyzer and Elixir. It's very much a case of a cursory Google around and then having stronger opinions about the limits of a language than people who spend time with the language.
> You've graduated from saying "I think someone more savvy with Elixir would know more."
It took further reflection on the idea. It helps to start with at least some epistemic humility, which is once again the crux of this thread. I'm willing to revert to "It ought to be possible", given that macros boil down to the final generated code (including typespecs), which is then analyzed by Dialyzer.
> Elixir is just as good at dependent typing as Idris, which has that feature built in?
Metaprogramming allows features to become 'built in'. e.g., being able to integrate type checking and compiler errors for HTML/XML or other DSLs.
> I was speaking specifically about your opinions on Dialyzer and Elixir. It's very much a case of a cursory Google around and then having stronger opinions about the limits of a language than people who spend time with the language.
I think your experience with Elixir and Dialyzer (or perhaps emotional attachment to these tools) is blinding you to the fact that there are features that your favorite language lacks. I gave you a concrete example of something that's a research topic in Elixir (inferring types from guards), which has an existent analogue in TS (type narrowing). You completely ignored that and fixated on the one sentence about experience. Your entire argument is that I should defer to your authority on the topic as an anonymous commenter, because you're supposedly very experienced. It feels like a very hollow appeal to authority.
> It took further reflection on the idea.
But by your standards simple reflection is not enough to establish authority on a topic. You need to have deep personal experience with using macros to parse text before you're qualified to make a firm judgement. You even went so far as to speculate on how TypeScript implements type checking and claimed that the "special sauce" is probably metaprogramming, despite having never worked on the compiler. This all feels incredibly contradictory.
> Metaprogramming allows features to become 'built in'. e.g., being able to integrate type checking and compiler errors for HTML/XML or other DSLs.
Built in does not mean "I might hypothetically be able to write my own macro that does this". It means it already exists in the language today.
Honestly this discussion feels like it's devolved into repetitive bickering, which is why I let your original comment about metaprogramming stand unanswered, and you reacted by following me into a different comment thread to reiterate the point.
You've really misjudged me. I've been programming professionally for 20 years, I have used and still use a variety of languages including Typescript.
> Your entire argument is that I should defer to your authority
Never made such an argument. My argument is that you have less authority on the subject matter than someone who has spent years with the language.
> I gave you a concrete example of something that's a research topic in Elixir (inferring types from guards), which has an existent analogue in TS (type narrowing)
At best it's a subset of the functionality rather than an analogue. This kind of research is being conducted by experts in type theory outside of Elixir. If it's a problem for Elixir, it's a problem for any other language that would attempt it, and absolutely I would defer to the authority of those experts who have spent years looking at type theory.
> despite having never worked on the compiler
Knowledge of meta-programming doesn’t require the same skillset as writing a compiler. I’m certain you, or any other capable software engineer, would be able to write a macro that parsed JSON, interrogated the typing of that JSON, and spat out a typespec with some knowledge of Elixir `defmacro`, `quote`, and `unquote`.
> Built in does not mean "I might hypothetically be able to write my own macro that does this".
Never made such a claim. My claim is that the JSON typing mechanism isn’t really something that’s dependent on the Typescript language and toolchain. As with the link in my last post about inline heex components, it’s possible to take a text representation of something in Elixir and transform it into something that’s available to the compiler (and hence also available to Dialyzer).
> At best it's a subset of the functionality rather than an analogue. This kind of research is being conducted by experts in type theory outside of Elixir. If it's a problem for Elixir, it's a problem for any other language that would attempt it, and absolutely I would defer to the authority of those experts who have spent years looking at type theory.
Ironically the researcher that you're deferring to (Giuseppe Castagna) wrote a paper about the set theoretic types Elixir is hoping to implement one day, and he points to TypeScript as a practical existent example of these types. This is the same paper Elixir's creator recommends as an introduction to the research [1]. The paper is called "Programming with union, intersection, and negation types" [2]. Here's a quote from the conclusion of the paper (page 55):
"...the need of set-theoretic types naturally arises when trying to fit type-systems on dynamic languages: union and negations become necessary to capture the nature of branching and of pattern matching, intersections are often the only way to describe the polymorphic use of some functions whose definition lacks the uniformity required by parametric polymorphism. The development of languages such as Flow, TypeScript, and Typed Racket is the latest witness of this fact."
You might also want to take a look at the discussion that took place on HN about the paper when it was published [3].
> wrote a paper about the set theoretic types Elixir is hoping to implement one day
The point is, if a solution already exists, its implementation details would simply be imitated and no further research would be required. The SOTA in set-theoretic types is not sufficient to capture typing for guards, which last I heard was a focus of Castagna’s research specifically for Elixir, though the sophisticated typing of Elixir may also present other significant challenges.
Again, type narrowing isn't related to pattern matching on guards. As far as I'm aware, pattern matching on function signatures is impossible for Javascript as it doesn't even permit function overloading - though Typescript could do name-mangling like C++ does to get around this. Guards allow you to overload functions based on the shape of the data, and guards can be quite complex when chained together (e.g., 'and', 'or').
But to refer back to the stated project goals, its aims are more ambitious than the SOTA:
> - The Elixir team wants to go beyond type annotations and make type signatures a direct part of the language in a way to add new idioms and expressive power to the engineering teams that opt-in. This should open new perspectives in the theory of typing and the application of bi-directional typing.
> - The language and runtime of Elixir have a distinct focus on fault-tolerance, concurrency, and distribution, which can lead to interesting typing considerations and modelling
> The point is, if a solution already exists, its implementation details would simply be imitated and no further research would be required.
Imitating ideas from one language to improve another is a form of research, and can be fairly difficult.
> As far as I'm aware, pattern matching on function signatures is impossible for Javascript as it doesn't even permit function overloading
There's no syntactic support for function overloading in JS, but it's fairly trivial to accomplish the same semantics using normal control flow (i.e. conditionals). This is why type narrowing (which uses control flow analysis) is relevant here. The paper I referred to above actually points this out. On page 6, under the section labeled "FUNCTION OVERLOADING", the author wrote:
"However, intersection types let us express ad-hoc polymorphism (i.e., function overloading) if coupled with some mechanism that allows functions to test the type of their arguments."
The author then goes on to describe this technique in more detail on the very next page:
"OCCURRENCE TYPING: Occurrence typing or flow typing [59, 46, 17] is a typing technique pioneered by Typed Racket that uses the information provided by a type test to specialize the type of some variables in the branches of a conditional...This method of refining types according to conditionals is important in type systems for dynamic languages and in those that enforce null safety: some examples include Ceylon [38], Dart [27], Flow, Kotlin [37], Typed Racket, TypeScript, and Whiley [47]."
> But to refer back to the stated project goals:
Thank you for linking to the list of project goals. The second bullet point under the list of goals says:
"To capture some programming patterns used in Elixir programming one needs to resort to the techniques of gradual typing, parametric polymorphism, and occurrence typing, three tools whose theoretical aspects have been already studied for semantic subtyping".
TS of course exhibits gradual typing and parametric polymorphism (generics), but did you notice the mention of occurrence typing? That's the same form of typing I referred to above, and the one the author explicitly mentions exists in TypeScript (amongst others).
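To make that concrete in TypeScript, here's my own sketch of the combination the paper describes: declared overloads (which act like an intersection of call signatures) resolved by a runtime type test that the checker narrows on. The `wrap` function is made up purely for illustration, not taken from the paper.

    // Declared overloads play the role of an intersection of function types:
    // roughly ((x: string) => string[]) & ((x: number) => number[]).
    function wrap(x: string): string[];
    function wrap(x: number): number[];
    function wrap(x: string | number): string[] | number[] {
      // The runtime type test is what lets one body serve both signatures;
      // the checker narrows `x` in each branch (occurrence/flow typing).
      if (typeof x === "string") {
        return [x.trim()];
      }
      return [x * 2];
    }

    const a = wrap("hi"); // string[]
    const b = wrap(21);   // number[]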
I honestly don't have the time to devote to this conversation any longer. I will most likely allow you to have the final word, because this discussion could carry on ad infinitum and is incredibly unproductive. As Paul Graham once said, this might be taken as a tacit admission of failure [1], but I have things to do.
> Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
Elixir/Erlang might already be able to do something like this with metaprogramming. It’s certainly possible to generate and run Elixir at compile time, and map types are already a good superset for JSON objects, so a compile-time JSON-to-map conversion could then provide an inferable type. I think someone more savvy with Elixir would know more. It’s certainly not something that I’ve needed.
> But even assuming Dialyzer is as good as TypeScript, have you found that the libraries you use actually have quality type definitions?
Strictly speaking, Dialyzer has two big benefits over more traditional type systems:
1. It doesn't require type definitions to be present, it can type check code having none at all.
2. "Dialyzer is never wrong", i.e. it never returns false positives, i.e. it's an underapproximating checker.
These are the design decisions taken by the tool authors. There's rationale behind them, though the experience of using checkers for other languages might not be 1-to-1 applicable because of that. These decisions come with downsides, of course, some of which are covered in this thread, so I won't repeat them, but in general they were taken deliberately due to characteristics of the language(s) (it was originally created for Erlang, not Elixir) and the state of type system theory at the time. Please bear in mind Dialyzer was created in 2006, some 6-7 years before TypeScript.
Is this coming from experience? Because it surely doesn’t match mine