Hacker News

TL/DR: Haskell makes you add more meta information to the code, so that compilers can reason about it.

One of the examples in the article:

Python:

    def do_something():
        result = get_result()
        if result != "a result":
            raise InvalidResultError(result)
        return 42
Haskell:

    doSomething :: Either InvalidResultError Int
    doSomething = 
        let result = getResult
            in if result /= "a result"
                then Left (InvalidResultError result)
                else Right 42
Personally, I prefer the Python version. In my experience, the benefits of adding meta information like types and possible return values are very small, because the vast majority of time fixing problems in software development is spent on systematic bugs and issues, not on dealing with type errors.


The point is that the Haskell type system is an expressive way of solving systematic bugs. You can express both the data itself and the valid ways of handling it using types, which gives you a space to design yourself out of systemic problems. And when something goes awry, the strict typing and control over side effects means that you can refactor fearlessly!

Re. the example, the compiler can infer types, and you can write almost the exact same code in Haskell as in Python:

    doSomething = do
       let result = getResult
       when (result /= "a result")
            (throwError $ InvalidResultError result)
       return 42
But as has been noted, you are unlikely to find code like this in Haskell. A value of Either type at the top level, without arguments, is either always going to be Left or always going to be Right, so it is a bit pointless. Since this example is so abstract, it is hard to see what the idiomatic Haskell would be; it would depend on the particulars.


Your criticism seems more general than just Python and Haskell. It's really about dynamic and static typing. That's a legitimate debate, but as far as static typing goes, Haskell has one of the best static systems around - and because of that, idiomatic Haskell is unlikely to look like the example you posted.


> so that compilers can reason about it

Actually, this is the wrong takeaway; I think it's so that programmers can reason about it.

This isn't about type errors, it's about precisely describing a particular computational expression. In the python example, it's very unclear what `do_something` actually _does_.


> the vast majority of time fixing problems in software development is spent on systematic bugs and issues, not on dealing with type errors.

You can make "systematic bugs and issues" into type errors, then deal with them as type errors.

If you can confuse meters and seconds because both are expressed as Double, and erroneously add them, wrap them in types Meters and Seconds and autogenerate Num and other classes' implementations. Voila: you cannot add Meters and Seconds anymore, you get a type error.
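A minimal sketch of that idea (the names Meters and Seconds are made up for illustration, not from any library): newtype wrappers over Double with a derived Num instance keep same-unit arithmetic working while rejecting mixed-unit arithmetic at compile time.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Hypothetical sketch: distinct newtypes over the same raw Double.
-- Show/Eq are stock-derived; Num is newtype-derived, so arithmetic
-- within one unit still works as before.
newtype Meters  = Meters  Double deriving (Show, Eq, Num)
newtype Seconds = Seconds Double deriving (Show, Eq, Num)

main :: IO ()
main = do
  print (Meters 3 + Meters 4)      -- Meters 7.0
  -- print (Meters 3 + Seconds 4)  -- rejected: can't match Seconds with Meters
```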

I do that even in C++, if I use it for my personal projects. ;)

But where Haskell really shines is effect control. You cannot open a file and write to it during execution of a transaction between threads. If you parse text, you can also restrict certain actions. There are many cases where you need control over effects, and Haskell gladly helps there.
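A small illustration of the transaction case using GHC's stm library (the transfer function here is a made-up example): a transaction runs in the STM monad rather than IO, so writing a file inside it simply doesn't type-check.

```haskell
import Control.Concurrent.STM

-- Move an amount between two shared counters atomically. The STM
-- return type means only transactional reads/writes are allowed here.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to (+ amount)
  -- writeFile "audit.log" "..."  -- type error: IO action inside STM

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- (,) <$> readTVarIO a <*> readTVarIO b
  print balances  -- (70,30)
```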


> TL/DR: Haskell makes you add more meta information to the code, so that compilers can reason about it.

Actually, Haskell lets you add more meta information to the code, similar to modern Python or TypeScript. Type information is optional. But you might want to add it; it is helpful most of the time.

In the example, doSomething implicitly depends on getResult, which doesn't show up in the type information, so the type information only tells you how you can use doSomething. To know what is doSomething, you actually have to read the code :\


I'm not sure that's entirely true (I wrote the examples): the point I'm trying to make is that you can precisely describe what `doSomething` consumes and produces (because it's pure) and you don't have to worry about what some nested function might throw or some side-effect it might perform.


> I'm not sure that's entirely true (I wrote the examples)

Which part?

> the point I'm trying to make is that you can precisely describe what `doSomething` consumes and produces (because it's pure)

I think you failed to demonstrate it, and more or less demonstrated the opposite of it: the type signature of doSomething does not show its implicit dependence on getResult.

In Haskell you can do

  foo :: Int
  foo = 5

  bar :: Int
  bar = foo + 1
(run it: https://play.haskell.org/saved/hpo3Yaef)

which is what your example does. In this example bar's type signature doesn't tell you anything about what bar 'consumes', and it doesn't tell you that bar depends on foo, and on foo's type. Also you have to read the body of bar, and also it is bad for code reuse.


> Which part?

This part: "the type information only tells you how you can use doSomething. To know what is doSomething, you actually have to read the code :\" I think we're disagreeing on something quite fundamental here, based on "it doesn't tell you that bar depends on foo, and on foo's type. Also you have to read the body of bar, and also it is bad for code reuse."

(Although I am certainly open to the idea that "[I] failed to demonstrate it".)

A few things come up here:

1. Firstly, this whole example was to show that in languages which rely on this goto paradigm of error handling (like raising exceptions in Python) it's impossible to know what result you will get from an expression. The Haskell example is supposed to demonstrate (and I think it _does_ demonstrate) that with the right types, you can precisely and totally capture the result of an expression of computation.

2. I don't think it's true to say that (if I've understood you correctly) having functions call each other is bad for code re-use. At some point you're always going to call something else, and I don't think it makes sense to totally capture this in the type signature. I just don't see how this could work in any reasonable sense without making every single function call have its own effect type, which you would list at the top level of any computation.

3. In Haskell, functions are pure, so actually you do know exactly what doSomething consumes, and it doesn't matter what getResult consumes or doesn't because that is totally circumscribed by the result type of doSomething. This might be a problem in impure languages, but I do not think it is a problem in Haskell.
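To make point 1 concrete, here is a hypothetical reworking of the article's example (with getResult's output passed in as an argument, so the function has something to decide on): the possible failure is part of the return type, and every caller is forced by the compiler to handle both cases.

```haskell
-- Hypothetical sketch: the error lives in the type signature,
-- not in a raise statement buried somewhere in the call graph.
newtype InvalidResultError = InvalidResultError String
  deriving (Show, Eq)

doSomething :: String -> Either InvalidResultError Int
doSomething result
  | result /= "a result" = Left (InvalidResultError result)
  | otherwise            = Right 42

main :: IO ()
main = do
  print (doSomething "a result")  -- Right 42
  print (doSomething "oops")      -- Left (InvalidResultError "oops")
```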


> In this example bar's type signature doesn't tell you anything about what bar 'consumes'

Yes, it does: `bar` in your example is an `Int`, it has no arguments. That is captured precisely in the type signature, so I'm not sure what you're trying to say.


I prefer the Haskell version. One can read it as a full sentence instead of being interrupted by the boring imperative Python version that breaks the train of thought on every line.


Ok, your Haskell example is basically drawing your opponent as the wojak meme and declaring yourself the winner.



