That might be true at the start of the project. But to be honest, having worked quite a few years with strong type systems, weak type systems and dynamic type systems, I've found that not having a strong type system is precisely what makes work grow faster than expected. Maybe it's because I'm accustomed to big projects, but I've been bitten too many times. I'd even choose Java over any language that doesn't have a dependable type system.
I have worked with strong type systems and with none at all, and have found that the absence of a type system is extra work.
Instead of the compiler telling me exactly what something is, what I can do with it, and if it makes sense to pass it to some function, I have to figure all of that out myself, greatly slowing me down.
Edit: It occurs to me I have yet to hear an answer to my question of how types hinder anything, especially if they are inferred. The only exception is the trivial requirement of sometimes having to cast between numeric types.
I really like type systems. I think that if you take the time to learn about type theory, then you are more likely to create better solutions (both with code and in life in general).
However, it isn't free. Type theory is a kind of math that most people have very little exposure to, so there's going to be a lot of work in order to start becoming proficient.
Additionally, there's more than one kind of type theory. Are you using a System F-like system? Are you going to have subtyping? Is it going to be structural subtyping? Maybe you want row polymorphism. Is there going to be a kind system? What about higher-order poly-kinds? Dependent typing? Intensional or extensional?
Additionally, there's more than one way to implement these type systems. OCaml functors ... is it a generative or an applicative functor? Haskell ... are you using GADTs, functional dependencies, or type families?
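To make a couple of those distinctions concrete, here's a minimal OCaml sketch of a functor and a GADT (the module and type names are invented purely for illustration):

    (* A functor: a module parameterized by another module. *)
    module type ORDERED = sig
      type t
      val compare : t -> t -> int
    end

    module MakePair (O : ORDERED) = struct
      type t = O.t * O.t
      (* put the smaller element of the pair first *)
      let sorted (a, b) = if O.compare a b <= 0 then (a, b) else (b, a)
    end

    module IntPair = MakePair (struct
      type t = int
      let compare = compare
    end)

    (* A GADT: the type parameter records what each expression evaluates to,
       so eval needs no runtime tags and ill-typed trees can't be built. *)
    type _ expr =
      | Int  : int -> int expr
      | Bool : bool -> bool expr
      | Add  : int expr * int expr -> int expr
      | If   : bool expr * 'a expr * 'a expr -> 'a expr

    let rec eval : type a. a expr -> a = function
      | Int n -> n
      | Bool b -> b
      | Add (x, y) -> eval x + eval y
      | If (c, t, e) -> if eval c then eval t else eval e

    (* eval (Add (Int 1, If (Bool true, Int 2, Int 3))) evaluates to 3 *)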
In the end I think type systems will eventually give you a completely typed experience that feels exactly like a fully dynamic one, but with compile-time and design-time checks that always leave you feeling good about it. However, I don't think we are quite there yet, and I don't think you can expect everyone to take the time to get sufficiently familiar with an arbitrary type system to be productive with it.
Yeah, you basically make initial implementation easier at the expense of maintenance and debugging, which sounds like a good tradeoff until you think about the relative amount of time you spend doing one thing vs the other.
Do you realize how insane that sounds? Static typing makes programs harder to debug? Harder to maintain??
On the contrary, static typing helps debugging and maintenance: changes that break invariants are more likely to be caught by the type system.
This talk of a tradeoff sounds wise on the surface, but it is hardly a tradeoff at all. For many people (including me), a good static type system makes prototyping and maintenance easier.
> Yeah, you [DarkKomunalec] basically [use static typing to] make initial implementation easier at the expense of maintenance and debugging
It was not clear that "Yeah" was an approval (and not a dismissal), and it was not obvious that "you" was a general "you" (and not a personal "you" directed at DarkKomunalec).
Nevertheless, you were still talking about a tradeoff, and I personally see none: in my experience, dynamic typing makes initial implementations harder (or longer), because so many errors are caught too late. Static type systems have a much tighter feedback loop.
Many, many people split their time between the two very differently than you imagine. There's a lot of scripting code that is written once and then never maintained.
A normal person doesn't care. He just wants, and expects, 2+2.5 to yield 4.5. He doesn't want to use a cast, or write the 2 as 2.0, or use some sort of baroque type conversion procedure, or anything like that.
This answer is not Python-specific, of course, but it's a good example of the overhead that gets introduced when a language becomes too type-happy.
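For concreteness, here's a minimal sketch of that friction in OCaml, the usual example of a language with no implicit numeric promotion (Python, by contrast, simply evaluates 2 + 2.5 to 4.5):

    (* let x = 2 + 2.5 *)            (* rejected: 2.5 is a float, (+) wants ints *)
    let x = float_of_int 2 +. 2.5    (* 4.5, but only via an explicit conversion
                                        and the float-specific (+.) operator *)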
So does anyone who's done math. They also know 3/5ths is different. It's not unreasonable to ask for addition to be defined in a reasonable way though.
> This answer is not Python-specific, of course, but it's a good example of the overhead that gets introduced when a language becomes too type-happy.
Besides OCaml, who actually does this for general programming? I can't think of many examples at all.
P.S., "A normal person doesn't care. He just wants". Stop this. The community here might give you a pass for being tedious and correct. Being tedious and incorrect is pretty much unforgivable.
Hmmm... I could have had an undergrad math degree in addition to the CS degree if I'd stuck around one more semester, but decided to head off for CS grad school instead. And yeah, I understand cardinality, and why there are more real numbers than integers (and could even write the proof for you from memory).
I also completely understand that 2.5 in a computer isn't actually represented as a non-integral real number, or anything like it. The computers we have now can't represent arbitrary real numbers (quantum computers can, I think, but I haven't studied those in any great degree). At one time I even wrote some non-trivial asm programs that ran on the 80x86 FPU, but I'd have to do a fair amount of review before doing that again.
So yeah, I'd say I've both "done some math" and have a good handle on how integers and floats are represented in a computer.
That still doesn't mean I want to have to put in a freakin' type cast when I add 2 and 2.5 on a pocket calculator. Nor does anyone else.
Answer my question. I'm not going to defend a non-existent problem.
Or is this about Pascal again? Did OCaml bite you, and do you still bear the mark? I'm trying to give you an opportunity to show this isn't a straw man. My most charitable hypothesis is that you really don't know much about modern static and strong typing techniques.
Everyone's numeric tower accounts for this and does sensible (if not optimal) type conversions. The best of the bunch give you fine grained control on what happens when. That something must happen is inescapable.
I'll bet he starts to care if floating point arithmetic introduces errors into his results. You can only push off the complexity for so long if you want to do things that aren't trivial.
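For example (a minimal OCaml sketch; binary floating point can't represent most decimal fractions exactly, so the cents drift):

    let close = 0.1 +. 0.2   (* 0.30000000000000004, so (close = 0.3) is false *)

    (* add up $0.10 a thousand times *)
    let total = List.fold_left (+.) 0.0 (List.init 1000 (fun _ -> 0.10))
    (* total comes out around 99.9999999999986, not the 100.00 you'd expect *)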
Can you think of a (non-contrived) example where automatic promotion to float is going to cause a non-trivial error when computing (say) a household budget?
"You can only push off the complexity for so long if you want to do things that aren't trivial."
There are a lot of things that aren't "trivial" that nonetheless don't require a totalitarian type system.
> Can you think of a (non-contrived) example where automatic promotion to float is going to cause a non-trivial error when computing (say) a household budget?
Having your share of holiday costs come out as NaN is fiddlier than getting an exception at the point where you actually divided by zero.
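A minimal OCaml sketch of that difference:

    let () =
      let zero = 0 in
      (try ignore (1 / zero)                    (* fails loudly, at the divide *)
       with Division_by_zero -> print_endline "caught at the division");
      let share = 1. /. 0. in                   (* no error, just infinity *)
      let nonsense = 0. /. 0. in                (* no error, just nan *)
      Printf.printf "%f %f\n" share nonsense    (* prints "inf nan"; the mistake
                                                   only surfaces much later *)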
The "type safe" guys like to pretend that their approach can catch all that stuff at compile time. It does catch a certain class of error, but at the cost of making the code take much longer to write. That doesn't work in a world where your competitor is iterating five times while you're still building the first one. Excellent way to get your milkshake drunk, that.
Sure, and the point is that using integer arithmetic for integer calculations gets you better error reporting, which saves you time when tracking down other bugs.
> The "type safe" guys like to pretend that their approach can catch all that stuff at compile time. It does catch a certain class of error, but at the cost of making the code take much longer to write. That doesn't work in a world where your competitor is iterating five times while you're still building the first one.
My experience is that I can iterate a lot faster if the compiler's able to help me catch errors faster. It doesn't slow down writing the code; almost anything I'd write in say Python translates directly into the same Scala. (I guess the language pushes me into defining my data structures a little more explicitly, but that's something I'd want to do in Python anyway if only for documentation reasons).
Isn't the biggest non-programmer audience who has any interest in writing Python scripts scientists? I don't think it's totally contrived to imagine that floating-point precision is an issue in such cases.
I'm really not a fan of this argument. No one is arguing for a banishment of the concept of a competency PMF. We're just saying, "If you use these newer techniques and tools and patterns, you get more bang for your buck."
The common response is, "But then I have to learn something new." But this is the curse of technology, and pretty much inevitable. Some learning needs to occur because tech changes so fast.
"If you use these newer techniques and tools and patterns, you get more bang for your buck."
But you don't, necessarily. Dealing with type wankery takes time. And no, it has nothing to do with "learning something new". Languages that tout "type safety" have been around since at least Pascal (47 years old)... arguably even before that, with some of the Algol variants.
Yet they've never made it to mainstream acceptance. It's not even about hype -- Pascal was pushed HARD, by just about every university. Where is it now? Turbo Pascal had a small degree of success, but that's only because it chucked a lot of real Pascal's rigidity out the window.
So... Can you clarify this? "Pascal is no longer popular. Pascal had a static type system. Therefore, static typing has failed?"
If so, I counter: the last 3 years have been a series of breakthroughs both in terms of technology and social acceptance of typed programming. TypeScript is the fastest-growing language, Haskell's never been more boring to use, and Scala's edging out Clojure even though it has very fragmented community leadership. C++ has adopted a lot of powerful new features, and you're seeing functional programmers speaking at C++ conferences because the folks there can actually use those ideas now. Java has more capable and composable lambdas than Python.
Systems not using these techniques are plagued by security flaws, while those that do use them get to spend their effort stabilizing performance under various workloads instead.
There's never been a better time to be using a strong, static type system.
"Can you clarify this? "Pascal is no longer popular. Pascal had a static type system. Therefore, static typing has failed?""
I would make it "Pascal was never popular", but yes.
"If so, I counter: the last 3 years have been a series of breakthroughs both in terms of technology and social acceptance of typed programming."
This isn't the first rodeo for many of us. "Compile-time static type checking will solve all of our problems" is an idea that's come around repeatedly. Outside of a few niche applications, it never works, or even catches on.
As for the supposed booming popularity of TypeScript... dude, TypeScript doesn't even make the top 30 on GitHub. It's less popular than assembly language and Visual Basic.
> I would make it "Pascal was never popular", but yes.
Then why... why bring it up? Should I discount all of dynamic typing because Io and Pike never took off? C++ did stick around, Oak became Java. APL is still in active use.
> This isn't the first rodeo for many of us. "Compile-time static type checking will solve all of our problems" is an idea that's come around repeatedly. Outside of a few niche applications, it never works, or even catches on.
"It never works" is a pretty bold claim given that the majority of code you interact with on a daily basis has SOME sort of type system. I'd say C++ is better endowed than most in this dimension.
> As for the supposed booming popularity of TypeScript... dude, TypeScript doesn't even make the top 30 on GitHub. It's less popular than assembly language and Visual Basic.
My dude, it would be extremely suspicious if it did. Instead, look at the growth stats: https://octoverse.github.com/. It's the fastest-growing language with a non-trivial presence on GitHub (and that qualifier is the correct way to phrase it: a brand-new language can show quadruple-digit percentage growth just by appearing).
This seems profoundly disingenuous. Is that your intent?
Yeah, I have. Java is notorious for tossing 30 page runtime exceptions all over the place. Given that the alleged advantage of getting type happy is to prevent runtime errors, can you explain how Java actually supports your case?
Are you mad at the existence of stack traces? Would you prefer it if the errors were unsourced? Are we pretending Python does it better or even that differently?
As for "the case", Java does reduce the # of NPEs you get by not letting you deref invalid keys on objects, and it makes it easier to handle objects in the abstract.
Well, for one, the post I was responding to claimed that no language touting type safety had ever caught on, and yet there is Java, likely the most used programming language on Earth next to C and C++ (which themselves offer some level of type safety).
But moving on to your claim in this post, nobody ever said "compile-time checks eliminate errors altogether." What they do do is reduce errors and completely eliminate certain classes of errors. They also make maintenance and debugging much easier because they define clear expectations for the interfaces of their arguments and return values. The length of stack traces is a completely orthogonal concern.
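As a small sketch of what those clear expectations look like in practice (an OCaml module signature; the names here are made up):

    module type ACCOUNT = sig
      type t
      val balance : t -> int        (* cents as an int, never a float or a string *)
      val deposit : int -> t -> t   (* passing "50" instead of 50 fails to compile *)
    end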
Yes, everything looks like it works, but occasionally it's completely wrong. I don't think just chugging along is desirable in all circumstances. But even if I accept your premise, that just says to me that Python is a good choice for people who don't really know how to program and don't care to learn too much.
Yes, we had non-trivial floating point errors in Level that appeared after multiple divide operations in an early version of the product. We stopped using it.