For me, the reason I abandoned Haskell (for a couple of years I wrote a substantial share of my projects in it) is the complexity associated with laziness. FP, immutable data structures, and monads are cool and mostly understandable once you grasp the concepts, but laziness is a double-edged sword: it enables a lot of nice things, yet it leaves me unable to reason easily about the space complexity of any nontrivial algorithm, and that repeatedly bites me when I have to write such code.
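To be fair about the upside first (this example is mine, not something from the above): laziness lets you define infinite structures and pay only for the elements you actually consume.

```haskell
-- Laziness means only demanded values are ever computed, so an
-- infinite list is fine as long as you take finitely much of it.
evens :: [Int]
evens = filter even [1 ..]

main :: IO ()
main = print (take 5 evens)  -- prints [2,4,6,8,10]; the rest is never built
```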
In imperative code, space complexity is obvious and explicit, time complexity is intuitive for me, and correctness is shaky and needs to be tested.
In Haskell, correctness tends to be obvious (barring typos that break compilation, the code almost always produces exactly the result I intended on the first try), but the space and time an algorithm takes can be, and often is, surprising to me. A tiny modification can make execution explode from 0.001 seconds to an hour, because processing n entries suddenly involves creating and disposing of n^2 thunks, eating all available memory and an extreme amount of time.
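The footgun side, with the textbook example (again mine, a minimal sketch rather than the exact case I hit): foldl versus foldl' is a one-character difference, yet it decides whether a fold runs in constant space or first builds a linear chain of thunks and only then collapses it.

```haskell
import Data.List (foldl')

-- Lazy left fold: the accumulator is never forced, so this builds
-- the thunk chain (((0 + 1) + 2) + ...) in memory before evaluating
-- any of it: O(n) heap for a sum that should need O(1).
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- Strict left fold: foldl' forces the accumulator at each step,
-- so it runs in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = do
  print (sumStrict [1 .. 10000000])   -- fine, constant space
  -- print (sumLazy [1 .. 10000000])  -- may exhaust memory for large n
```

That's the mild, linear case; the n^2 blowups come from the same mechanism applied to nested or shared structures.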
And despite repeated attempts to wrap my head around it, this keeps happening to me; it's just not intuitive. For me, eyeballing the efficiency and the exact time/space behavior of a nontrivial Haskell function is as hard as eyeballing whether arbitrary C code is fully correct, with no memory leak or out-of-bounds write possible in some edge case. I have trouble with Prolog for the same reason.