I'm a little tired of this comparison and this point. It's fine if you like Brave New World more than 1984. But does this need to be mentioned every time Orwell is mentioned? Orwell wrote a lot more than 1984 and Animal Farm.
Not to mention the huge posthumous bump that Rorty got for being labeled "The Philosopher who predicted Trump." There was even a new collection of his essays out in 2022 [0].
> MacIntyre romanticizes ancient communities and traditions, but ignores the fact that plenty of those upheld horrifying practices
What makes you think that? A huge part of After Virtue (basically the whole rest of the book, after the initial diagnosis of where we are now and how we got here) is about how to construct and understand communities that might provide a shared idea of human good without simply going back to an Athenian idea of what that looks like. In fact, if I were to summarize the book in a nutshell, I would argue it's an attempt to rehabilitate Aristotelian ethics without simply accepting Aristotle's own moral precepts.
Does the article he links to toward the end address your concerns?
> Without complex input spaces, there's no explosion of edge cases, which minimizes the actual benefit of PBT. The real benefits come when you have complex input spaces. Unfortunately, you need to be good at PBT to write complex input strategies. I wrote a bit about it here...
Man I wish I was as good at pixel art as those people apparently are!
That's part of why I made my app into a platform where you can build apps but also share apps, libraries, and game assets. I can't make a game all on my own! I'm hoping people make and share spritesheets with this, after someone makes a spritemaker app of course. I'll keep working on my basic one.
Micro is nice because it is a single-file, stand-alone executable with mouse support, macro record/playback, and syntax highlighting. (I haven't checked Nano recently.) It is great for making quick edits to JSON configs, shell scripts, Python scripts, etc. Syntax highlighting and line numbering are key. If I need to make a really quick edit, it is much faster to use this than to wait for VS Code or PyCharm to load. You also stay focused: your eyes never leave the terminal window you are working in, which lets me complete the task at hand more quickly.
Even better, run `mypy` as part of your LSP setup and you don't even need to wait to run `mypy` to see type errors! If I make a mistake I want to be notified immediately.
I would love to read the steel man case for dynamic typing, but "static typing is for people who need therapy" isn't doing it for me. Anyone have something to recommend so I can understand the intended benefits of dynamic typing?
> I would love to read the steel man case for dynamic typing
It's very simple: sometimes lower development effort matters more than correctness. Not all software has to run in a hostile production environment to be useful.
That would seem to be begging the question to an extent. Why does dynamic typing lead to lower development effort? I mostly write Python and make heavy use of type hints. With LSP set up, mypy informs me immediately of any potential type errors which makes development way easier for me.
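As a small illustration (hypothetical function, not from the thread): with annotations in place, a checker like mypy can flag a bad call before the program ever runs, while at runtime the annotated code behaves like ordinary Python.

```python
def halve(n: int) -> int:
    """Integer-halve n; the annotation lets a static checker verify callers."""
    return n // 2

# A checker such as mypy would reject a call like the following
# without running anything:
# halve("10")  # error: incompatible type "str"; expected "int"

# Meanwhile the annotated code runs as ordinary Python:
assert halve(10) == 5
```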
Just saying "dynamic typing is easier" doesn't do it for me without further qualification since that statement doesn't conform to my own experience.
How much is maintaining your type system really helping you out? I find I spend more time tracking down irrelevant type warnings than fixing type issues. For example if you declare a type hint on a function parameter as `foo: int = None` you will get a type error. It says that a parameter of type int can't have a None value. This is false. So now I have to update my declaration to be `foo: Optional[int] = None`. This yields no value because when you say `= None` you are saying that this is an optional argument. The more you tighten your type declarations the more you will be chasing non-existent issues.
> For example if you declare a type hint on a function parameter as `foo: int = None` you will get a type error. It says that a parameter of type int can't have a None value. This is false.
No, it is correct. The value None is not within the domain of type int; a parameter that can take the value None does not, in fact, have type int.
> This yields no value because when you say `= None` you are saying that this is an optional argument.
When you provide any default value, you are making the argument optional, but that's an orthogonal concern to the `Optional[T]` type, which doesn't mean "optional argument". It is simply a more convenient way of expressing `Union[T, None]`, though with the modern type syntax, `T | None` is generally more convenient than either `Union[T, None]` or `Optional[T]`.
No offense, but this sounds like user error. I rarely have irrelevant type warnings. If I do, it suggests something is wrong with my design.
If you declare a function parameter as `foo: int = None`... that is just an incorrect declaration. Of course a variable annotated as `int` can hold a `None` value, but that is because any variable can hold a value of any type in Python. Within the Python type (annotation) system it is simply the case that an `int` and an `int | None` are two different things, as they are in other languages (e.g. Rust's `T` vs `Option<T>` types).
Mypy used to support the "implicit optional" feature you describe but now you must make nullable arguments explicitly optional. This is in line with Python's "explicit is better than implicit" design philosophy. In any case, how long does it take you to just type `foo: int | None = None`? Or you could re-enable the old behavior to allow implicit optionals with `--implicit-optional` or the corresponding config file option. It seems like you just need to configure mypy to match your preferences rather than fighting with its defaults.
To return to the broader point, I'm unsure what an "irrelevant type warning" is, but I suspect that has something to do with my lack of appreciation for dynamic typing. Can you give an example that isn't just a complaint about typing an extra 6 characters or about mypy being misconfigured for your preferences?
No offense taken. I've had a long career with many different phases. I had a phase where I built palaces of types bordering on DSLs. I ended up building myself a straitjacket. I wasn't solving the customer problem, and I was only slowing myself down. Software is an engineering discipline. Every situation needs to be critically evaluated. Building a life-critical, single-point-of-failure component is much different from building a road over a culvert. In early languages you needed to specify types so the compiler knew how much space to allocate for storage. In languages like Python, type hints are more like documentation. They help the reader understand how to use your code. They can be used for correctness. However, if correctness is a primary criterion driven by engineering requirements, I'd probably consider another language and accept that it is going to cost more and be slower to develop.
No, I was correct. Briefly, my question was "why is static typing good?" and the answer given was "static typing is good because it makes development easier." To the extent that "good" here just means "makes development easier" (and I think that is a lot of what "good" means in this context) then the answer I received was question begging to precisely that extent. Which is why I said "... to an extent." The conclusion was not quite assumed but a pretty similar conclusion was.
I can see how it appeared that I was using the phrase in the incorrect way! That usage bothers me too, and I am attentive to it.
> Why does dynamic typing lead to lower development effort?
Because you can run your program to see what it does without having to appease the type checker first.
There is nothing wrong with presenting type hints or type errors as warnings. The problems arise when the compiler just flat-out refuses to run your code until you have finished handling every possible branch path.
I'm against anything that adds cognitive load without a compelling reason, anything that makes me do unnecessary work. Typing `todo!()` is not a huge burden, but it's not zero, and if the Rust compiler is smart enough to fill in the hole, it should be smart enough to do it without my having to explicitly tell it.
No, because 99.9999% of the time, you explicitly do not want the compiler implicitly filling in a hole like that, and do want the compiler to tell you that you've forgotten an entire branch of control flow. Typing `todo!()` to make your intent explicit is among the least obtrusive things imaginable.
> No, because 99.9999% of the time, you explicitly do not want the compiler implicitly filling in a hole like that
Who are you to tell me what I want? You are making all manner of tacit assumptions about the kind of work I do and what my requirements are. I absolutely do want the compiler filling in every hole it possibly can. For me, that's the whole point of using a high-level language in the first place. For the kind of work I do, what matters most is getting things done as quickly as possible. Correctness and run-time efficiency are secondary considerations.
If correctness is a secondary consideration, then you'll be happy to learn that LLMs will let you code in plain English, so there's no need to bother with using a programming language at all. The LLM will happily fill in all the holes it encounters, more eagerly than any compiler would ever dream of.
But for people who prioritize precision and correctness, that's what programming languages were invented for.
There are two problems with using LLMs for writing code. The first is that when the code they produce doesn't work (which is most of the time) I still have to debug it, and now I'm debugging someone else's (or someTHING else's) code rather than my own. The second is that the work I do is very specialized (custom ASIC design) and LLMs are utterly useless.
But yeah, if all you need to do is build a vanilla app then an LLM is probably an effective tool.
What is an example of a compiler that flat out refuses to run (compile) your code? Obviously Python is not an example. The other language I know best is Rust, where as I understand the compiler doesn't refuse to compile your code, it cannot compile your code. Is there a language where the compiler could compile your code but refuses to do so unless the types are all correct?
Not that you should ever write this in any language, but as an illustration:
    fn main() {
        let x: i32 = if true { 1 } else { "3" };
        println!("{}", x);
    }
This will not compile, even though, if it were allowed to execute, it would correctly assign an integer to x. Python will happily interpret its equivalent:
    x = 1 if True else "3"
    print(x)
Even giving the if-expression an explicit `true` constant for the condition, Rust won't accept that as a valid program even though we can prove that the result of the expression is always 1.
> the compiler doesn't refuse to compile your code, it cannot compile your code
"Can not" and "will not" are kind of the same thing in this context. It's not like compilers have free will and just decide to give you a hard time. It's the language design that makes it (im)possible to run code that won't type-check.
1. Dynamic typing says "anything I can do these operations on, I accept as input". This lets you accept input from code that doesn't have something that fits the strict definition of what input you were expecting.
2. Dynamic typing lets you change what shape your data is without having to change type annotations everywhere.
Problem is, I think both of these are deeply flawed:
1. If you're writing something nontrivial, you probably have several layers of nested function calls, some of which may be calls to libraries that are not yours. If you're saying "anything I can do these operations on, I accept", it becomes very difficult to say what the full extent of "these operations" are. Thus it becomes hard to say whether the caller is passing in valid input. You can "just try it", but that becomes hard if you care about it working in all cases.
2. Refactoring IDEs are a thing these days. You want to change the type signature? Press the button. Even better, it will tell you everything you broke by making the change - everywhere where you're doing something that the new type can't do. Without types, sure, you can just change it without pressing the button. Now try to find all the places that you broke.
It may be possible to construct a better steelman than I have done. For myself, even trying to steelman the position, I find it incredibly unconvincing.
"Static typing is a powerful tool to help programmers express their assumptions about the problem they are trying to solve and allows them to write more concise and correct code. Dealing with uncertain assumptions, dynamism and (unexpected) change is becoming increasingly important in a loosely coupled distributed world. Instead of hammering on the differences between dynamically and statically typed languages, we should instead strive for a peaceful integration of static and dynamic aspects in the same language. Static typing where possible, dynamic typing when needed!"
But you can achieve #1 with typing.Protocol in type-annotated Python and traits in Rust. Fitting the "strict definition" sounds like nominal typing, but you can opt in to explicit duck typing or structural typing while still being typed. (Someone correct me if I'm using these terms incorrectly.) In short, you can still encode a lot of flexibility with types without abandoning them altogether.
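A minimal Protocol sketch (hypothetical classes) of the "anything I can do these operations on, I accept" idea, expressed as a static structural type:

```python
from typing import Protocol

class Quacks(Protocol):
    # Structural interface: any object with a matching quack() conforms,
    # with no inheritance or registration required.
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Robot:  # unrelated to Duck, but structurally compatible
    def quack(self) -> str:
        return "beep"

def make_noise(q: Quacks) -> str:
    return q.quack()

assert make_noise(Duck()) == "quack"
assert make_noise(Robot()) == "beep"
```

A checker like mypy accepts both calls because it matches on structure, not on class ancestry.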
And with #2, you can get that with static typing too... Let's say a method accepts an instance of an object `Foobar`. I can change the definition of `Foobar` ("change what shape [my] data is") without having to change type annotations everywhere.
I agree with you, I guess, that I find the steel man position unconvincing.
Hard to conceive of a case where that would occur. Can you think of one?
The implication of what you're saying seems to be that if you're concerned about some kind of correctness you should be writing unit tests anyway and not being so fussed about type checking in a language like Python. I suppose if you are strictly following TDD that might work, but in all other cases type checks give you feedback much more quickly than unit tests ever can. I guess I don't understand.
Any old math with consistent types and incorrect calculation? (If only we'd diff'd the output with some known result.)
No doubt we're at cross purposes.
I'm a little surprised that you haven't mentioned — "Program testing can be used to show the presence of bugs, but never to show their absence!" (or existential versus universal quantification).
A type system can be sound or complete, but not both (if the language is Turing-complete). If a language is dynamically typed, its type system is probably complete but not sound (many, but not all, dynamically typed language implementations have some basic static type checking). If it's statically typed, it can be either sound or complete. The difference: a sound type system will reject valid programs that it cannot prove are valid, while a complete type system will reject only those invalid programs it can prove are invalid.
In practice, the choice is more one of default these days:
Do you default to static typing and allow some dynamic (runtime) type checking? C++, C#, Java, etc.
Do you default to dynamic typing and allow some static (compile-ish time) type checking? Python, JS, Common Lisp, etc.
This lets both sides have what they really want. They want to be able to express all valid programs (a complete type system) but they want to reject all invalid programs (a sound type system). They can't have it. So you end up choosing a direction to approach it from. Either start conservatively and relax constraints, or start liberally and add constraints.
If you accept that, then the case for dynamic typing is that it's a choice to move from the too permissive extreme towards a stricter position. For me this works better (in general, though I also happily use statically typed languages like Ada) because I find it easier to add constraints to a relaxed system than to remove constraints from a restrictive system.
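That "start liberal, add constraints" direction can be sketched with Python's gradual typing (hypothetical functions): unannotated code stays fully dynamic, while annotations opt individual pieces in to static checking.

```python
def dynamic(x):  # no annotation: a checker treats x as Any, anything goes
    return x + x

def static(x: int) -> int:  # annotated: a checker enforces int here
    return x + x

# Both behave identically at runtime; only the static constraints differ.
assert dynamic("ab") == "abab"
assert dynamic(2) == 4
assert static(2) == 4
```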
So you're saying it goes up to 11x?