Once you go functional, you can never go back (zx.rs)
32 points by vsakos on May 18, 2015 | 93 comments


The "six distinct features of a functional language" are misleading/inaccurate:

1. Laziness: Not required. See Lisp, Scheme, ML, OCaml, etc.

2. Functions as first-class citizens: This is probably the only hard and fast requirement.

3. No Side Effects: Not required. Again, see Scheme, ML, and Rust.

4. Static Linking: Certainly not, but the author seems to mean static binding, which is more important. However, a functional language doesn't actually need any binding aside from function invocation (see Lambda Calculus). `Let` bindings are generally available and very useful.

5. No explicit flow control: Function invocation is flow control. Loops aren't functional, but some functional languages have them.

6. No commands/procedures: If the author means "no top-level function definitions" that is clearly not true. Some functional languages even have macro languages.

This article gives the (incorrect) impression that functional programming is about a set of restrictions you must follow 100% of the time. Functional programming is a style that can be used in any language, as long as you can pass functions around as values. It wasn't pretty, but Java <=1.7 could still support a functional programming style by using `Callable` objects.

The `map` and `reduce` operations are certainly possible in imperative languages. Python has them built-in, they can be written in C++, and so on.


There isn't consensus that Lisp, Scheme, ML, and OCaml are functional. Period. There certainly is consensus everywhere that they aren't purely functional. Purely functional doesn't require nor imply laziness, but the two tend to go together for good reasons.

"Functional programming is a style that can be used in any language"

No, I'm sorry, it's not. [Purely] Functional programming provides guarantees and properties that are only valid if the necessary discipline is enforced. Your list of languages are merely procedural languages with functions.


They don't completely apply even for the languages the author talks about.

Those are more a list of Haskell features, if you replace static binding with dynamic binding. Even then, Haskell lets you kind of escape most of those rules, except for #6.


I feel a bit sorry for people getting introduced to "functional programming" from the imperative world. The plethora of language features available in FLs distracts them from the main endeavour of FP - which is modeling your problem domain in terms of function composition.

With that perspective, one can do FP in pretty much any language and reap all of its conceptual and engineering benefits. Without that perspective, one can end up completely avoiding functional modeling of the problem domain at hand when using a language tailored to FP and end up faring no better than when using a language encouraging imperative style.

FWIW ..

> Fibonacci sequence in LISP-oid Scheme

.. has exponential complexity whereas the C version runs in O(n).

    long fib(long n) {
        return n < 2 ? n : fib(n-1) + fib(n-2);
    }
.. is the C equivalent of the same badly performing fib implementation.


Here's a memoized Fibonacci in Haskell

    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
This is essentially equivalent to

   long memotable[100]; // sized for n < 100; filled with -1, with first elements 0 and 1
   long fib(int n) {
       if(memotable[n] != -1)
           return memotable[n];
       memotable[n] = fib(n-2) + fib(n-1);
       return memotable[n];
   }
Edit: Actually, the Haskell version can be O(1) in memory as the front of the list can be garbage collected, while the C++ version is O(n) in memory.


> Edit: Actually, the Haskell version can be O(1) in memory as the front of the list can be garbage collected, while the C++ version is O(n) in memory.

Do you expect the following code to run in constant space?

    head (drop 1000000000 (let fibs = 0 : 1 : zipWith max fibs (tail fibs) in fibs))
Neither ghci nor ghc seems to run it in constant space.


That's because the entries in fibs are thunks which become very large. This runs in constant space:

    zipWith' f (x:xs) (y:ys) = let r = f x y in r `seq` r : zipWith' f xs ys
    head (drop (10 ^ 9) (let fibs = 0 : 1 : zipWith max fibs (tail fibs) in fibs))


Nice. .. and you meant to type zipWith' on the second line.


Yeah I did!


That's totally academic anyway: with 64-bit integers, you can only go as high as n=92 (i.e. floor(64 / lb φ)) before overflowing.


With Haskell, you aren't limited to 64-bit.

    Prelude> let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
    Prelude> :set +s
    Prelude> fibs !! 10000
    33644764876431783266621612005107543310302148460680063906564769974680081442166662368155595513633734025582065332680836159373734790483865268263040892463056431887354544369559827491606602099884183933864652731300088830269235673613135117579297437854413752130520504347701602264758318906527890855154366159582987279682987510631200575428783453215515103870818298969791613127856265033195487140214287532698187962046936097879900350962302291026368131493195275630227837628441540360584402572114334961180023091208287046088923962328835461505776583271252546093591128203925285393434620904245248929403901706233888991085841065183173360437470737908552631764325733993712871937587746897479926305837065742830161637408969178426378624212835258112820516370298089332099905707920064367426202389783111470054074998459250360633560933883831923386783056136435351892133279732908133732642652633989763922723407882928177953580570993691049175470808931841056146322338217465637321248226383092103297701648054726243842374862411453093812206564914032751086643394517512161526545361333111314042436854805106765843493523836959653428071768775328348234345557366719731392746273629108210679280784718035329131176778924659089938635459327894523777674406192240337638674004021330343297496902028328145933418826817683893072003634795623117103101291953169794607632737589253530772552375943788434504067715555779056450443016640119462580972216729758615026968443146952034614932291105970676243268515992834709891284706740862008587135016260312071903172086094081298321581077282076353186624611278245537208532365305775956430072517744315051539600905168603220349163222640885248852433158051534849622434848299380905070483482449327453732624567755879089187190803662058009594743150052402532709746995318770724376825907419939632265984147498193609285223945039707165443156421328157688908058783183404917434556270520223564846495196112460268313970975069382648706613264507665074611512677522748621598642530711298441182622661057163515069260029861704945425047491378115154139941550671256271197133252763631939606902895650288268608362241082050562430701794976171121233066073310059947366875
    (0.03 secs, 15,213,608 bytes)
If you have a practical use for large Fibonacci numbers, this seems an entirely practical way of producing them.


Racket doesn't have built-in memoization, but you can define a "define/mem" macro to handle the memoization details and reuse it later. This way, the example from the article runs in O(n) time without other changes:

  #lang racket
  
  (define-syntax-rule (define/mem (name arg) body ...)
    (begin
      (define cache (make-hash))
      (define (name arg)
        (hash-ref! cache arg (lambda () body ...)))))
  
  (define/mem (fib n)
    (cond
      [(= n 0) 0]
      [(= n 1) 1]
      [else (+ (fib (- n 1)) (fib (- n 2)))]))
  
  (fib 100)
But be careful: this is O(n) in memory, because the calculated values are never collected.


    (define (fib n)
      (fib-iter 1 0 n))
 
    (define (fib-iter a b count)
      (if (= count 0)
          b
          (fib-iter (+ a b) a (- count 1))))


Not sure even function composition is universal.

The ML family, for example, lets you model a lot of things in type specifications, and Haskell goes even further, making types a good model even for behavior. And Lisp has a deep focus on metaprogramming.


You're partially right, but function composition features in the type world too. For example, when you say -

    data Tree a = Leaf a | Branch (Tree a) (Tree a)
It looks like you're dealing purely with types, however you've declared two constructors "Leaf" and "Branch" which are, in essence, functions with the following signatures -

    Leaf :: a -> Tree a
    Branch :: Tree a -> Tree a -> Tree a
Additionally, you have `Branch` being an operation that's closed on trees (i.e. maps trees to trees), which composes nicely.

On top of that, type classes are mechanisms to talk about which functions can be called on which data, hence guiding which functions can be composed with which others.
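For example, here's a sketch reusing the `Tree` type above (the `Pretty` class and its instances are made up for illustration): a class names an operation, instances declare which types support it, and so the types tell you what composes with what.

    class Pretty a where
        pretty :: a -> String

    instance Pretty Int where
        pretty = show

    instance Pretty a => Pretty (Tree a) where
        pretty (Leaf x)     = pretty x
        pretty (Branch l r) = "(" ++ pretty l ++ " " ++ pretty r ++ ")"

With these instances, `pretty (Branch (Leaf (1 :: Int)) (Leaf 2))` gives `"(1 2)"`.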


This would cause stack overflow.


It won't - but how is that even relevant? The issue is that it's exponential time.


> Since this is impossible in imperative languages, I can't give you a comparison.

I'll bite. Here's foldr() in JavaScript, written in an imperative style:

    function foldr(func, init, list) {
        var acc = init;
        for (var index = list.length - 1; index >= 0; index--) {
            acc = func(list[index], acc);
        }
        return acc;
    }
It isn't defined via pattern matching, but it's absolutely a higher-order function. Functional programming is a collection of ways of thinking about problem solving, and it can be applied to any language.


No idea why the author thinks it's impossible. I mean, C stdlib comes with qsort.

Here's a map in C

    #include <stdlib.h>

    // Applies f to each of `count` elements of `in`, writing the results
    // into a freshly malloc'd array. in_s/out_s are the element sizes.
    void *map(const void *in, void (*f)(const void *src, void *dst),
              size_t count, size_t in_s, size_t out_s)
    {
        char *out = malloc(count * out_s);
        const char *src = in;

        for(size_t i = 0; i < count; i++)
            f(src + (i * in_s), out + (i * out_s));

        return out;
    }


I like what you are doing here. Since I've been introduced to Haskell, I wondered about functional programming in JavaScript. To my surprise, JavaScript is versatile enough to support a lot of functional concepts!


All these languages have a similar enough semantic core: if you remove assignments and mostly use closures/higher-order functions, you end up with a syntactically heavy[1] untyped ML (sic). {Coffee,Live}Script comes to mind.

[1] Remove `return` and curly braces .. with ES6 unpacking/pseudo-pattern-matching and `let`, you're halfway there.


I can recommend flicking through https://leanpub.com/javascript-allonge/read -- it's a really good book on functional programming in JS. It ends up doing Y combinators, trampolining, etc.


Actually, the original version of this book is no longer for sale. The link for the updated free version is here: https://leanpub.com/javascriptallongesix/read

Or, if you want to buy it in other formats, here: https://leanpub.com/javascriptallongesix


Also, for C# IEnumerables, .Select() is map and .Aggregate() is reduce.


IIUIC, the example they give of recursively calculating the Fibonacci series is pretty much _the_ textbook example of how not to use recursion --- it's O(2^n) and doesn't use tail calls, which means it's slow and will gobble memory.

Actually calculating the Fibonacci series efficiently in a non-lazy functional language looks surprisingly non-trivial. In Haskell it's pretty simple:

http://blog.srinivasan.biz/software/fibonacci-numbers-the-sl...


The Haskell code in question:

    fibs = 1:1:zipWith (+) fibs (tail fibs)
It's also a common showcase for Perl6 lazy lists:

    my @fibs = 1, 1, * + * ... *;


Note that Perl6 uses big integers by default, so this will work even beyond the limit n=92 at which 64-bit integers will overflow.


Haskell does this, too.


I will admit Perl 6 is pretty damn cool. I'll definitely have to look at it soon.


Seriously: Scheme is guaranteed to have tail-call replacement, the author should really take advantage of that.

Here's one that does:

    (define (fib-tco n)
      (let loop ((x 0)
                 (y 1)
                 (num n))
        (if (zero? num)
            x
            (loop y (+ x y) (- num 1)))))
Try getting the 1000th Fibonacci number with each of these.

Also, what's with the dangling parentheses?


Here's a page about calculating Fibonacci numbers efficiently:

http://www.nayuki.io/page/fast-fibonacci-algorithms

The method you linked to is listed there as "Dynamic programming (slow)".
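For reference, here's a sketch in Haskell of the fast-doubling method that page recommends, using the identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)²; it needs only O(log n) bignum multiplications (and assumes n >= 0):

    fastFib :: Integer -> Integer
    fastFib = fst . go
      where
        -- go n returns (fib n, fib (n+1))
        go 0 = (0, 1)
        go n =
            let (a, b) = go (n `div` 2)   -- (fib k, fib (k+1))
                c = a * (2 * b - a)       -- fib (2k)
                d = a * a + b * b         -- fib (2k+1)
            in if even n then (c, d) else (d, c + d)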


> Actually calculating the Fibonacci series efficiently in a non-lazy functional language looks surprisingly non-trivial.

I don't know if you'd count it as trivial, but one could do

    def fib(n, a=0, b=1, c=2):
        if n <= 1:
            return n
        if n == c:
            return a+b
        return fib(n, b, a+b, c+1)
which beats the Haskell lazy list in being O(1) space (except Python doesn't eliminate tail calls).
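For comparison, here's a sketch of the same accumulator idea in Haskell, where tail calls are eliminated; the bang patterns keep the accumulators evaluated so no thunks pile up:

    {-# LANGUAGE BangPatterns #-}

    fib :: Integer -> Integer
    fib n = go n 0 1
      where
        go 0  a  _  = a
        go !k !a !b = go (k - 1) b (a + b)

This runs in constant space, ignoring the growth of the numbers themselves.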


The front of the list can be garbage collected, so the Haskell version can be O(1) in memory.


Ignoring growth in the size of the output itself (and the intermediary values that are output of earlier stages), anyway...


Furthermore, it's very hard to see the merit of FP strictly from Fibonacci.

I'd go so far as to assume most of us don't spend all day coding anything close to the Fibonacci series.


I love FP, but it always seems a bit disingenuous to claim that it is "pure". Most programs' raison d'être is to do IO (that is to say, to have side effects), so by construction most programs are impure.

Bjarne Stroustrup has a saying about C++: you don't pay for what you don't use. I like to think about FP that way: you don't pay for side effects you don't use.


A program can be both purely functional and have side effects. See Haskell, and in particular, see [1].

[1] Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell, Simon Peyton Jones, http://research.microsoft.com/en-us/um/people/simonpj/papers...


If an effect is explicit, is it still correct to call it a "side" (implicit) effect?


Execution of Haskell has side effects, although they are left unspecified by the language. Memory is used, heat is emitted...

I agree that explicit effects should not be called "side" effects.


FP languages are pure to different degrees: Looking at Haskell, each time you actually want to do something (change the state of the world), you are severely beaten with a shower of monads that make side effects hell, even if the specific side effect you need is as trivial (and innocuous) as they come. That is, while you might not pay for side effects you don't use, you are unreasonably overcharged for the side effects you do use, even if they're not worth the trouble (which is why I don't use Haskell).

OCaml otoh gives you the choice to turn down the purity a bit (allowing side effects in principle) and even gives you some imperative primitives (viewer discretion is advised). It gives you the benefits of FP without the unreasonable costs that Haskell imposes. I assume other FP languages can do that, too.

[Edit: Clarification.]


> each time you actually want to do something (change the state of the world), you are severely beaten with a shower of monads that make side effects hell, even if the specific side effect you need is as trivial (and innocuous) as they come.

    readFile "/etc/issue" >>= putStrLn
Man, I feel so beaten.


Ha, just put this code into any old deeply nested function that's not already contaminated with the IO monad. Or try to do something that needs two monads at the same time.


> Ha, just put this code into any old deeply nested function that's not already contaminated with the IO monad.

Why would I want to do that? I could just pass the filepath to that function if it needed it, or if I want it to log messages I could use the writer monad.

> Or try to do something that needs two monads at the same time.

Monad transformers aren't so hard: http://www.cs.virginia.edu/~wh5a/personal/Transformers.pdf
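For instance, here's a minimal sketch (assuming the mtl library) that stacks StateT on top of IO, mixing a State effect and an IO effect in one function:

    import Control.Monad.State

    countdown :: StateT Int IO ()
    countdown = do
        n <- get                -- State effect: read the counter
        lift (print n)          -- IO effect, lifted from the base monad
        when (n > 0) (put (n - 1) >> countdown)

    main :: IO ()
    main = evalStateT countdown 3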

Edit: One of my favorite papers, I believe multiple monads are used in it: http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/ba...


Here's an explanation of how IO works in Haskell: http://blog.jle.im/entry/first-class-statements

If you go through it and understand it, you will find that it is completely "pure".


Depending on your POV, even C can be said to be purely functional!

http://conal.net/blog/posts/the-c-language-is-purely-functio...


Note that Backus investigated purity long ago (FL) and ended up banging on the IO wall too. I don't get the IO Monad really (I only grasp a tiny bit of it), but I wonder if that's the only way to define IO to fit purely functional evaluation.


I've wanted to get some clarity on the difference between effects and side effects for some time. I think side effects are effects that aren't explicit in the program. So if you have a function with signature `Int -> Int` which also prints something, that is a side-effecting function. But if you have a function `Int -> IO Int`, that is not a side-effecting function. But it is a function with effects.

To complicate things, functional programmers often like to talk about effects in a broader sense, such as errors and failure. But these effects are often perfectly pure; the effect is some kind of semantics layered over some other, existing function. Like an addition that either returns the result, or `Nothing` if the input is malformed in some way. This effect has nothing to do with IO. So what is a non-side-effectful function that does IO? Is it called an "IO effect"?
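A sketch of such a pure "failure effect" in Haskell (readMaybe lives in Text.Read):

    import Text.Read (readMaybe)

    -- Pure despite the "effect": same inputs always give the same output,
    -- and failure is just a value (Nothing), not an IO action.
    addStrings :: String -> String -> Maybe Int
    addStrings a b = do
        x <- readMaybe a
        y <- readMaybe b
        return (x + y)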

To make things even more complicated, apparently expressions with `IO` in them are pure. That is because it is left to the runtime to actually execute the IO action, or something. At this point, you might feel that they are yanking your chain with the old "just solve this problem by adding another layer of indirection" trick (this layer being conceptual). I need to read more on this topic.

> Most programs raison d'être is to do IO

You can strengthen that: all programs that are to be evaluated need to do IO in order to do anything useful (heating the CPU is not useful). At least I can't think of any counterexamples. Surely people like Haskell programmers know this, and the point of IO was never to ban IO in the first place, just to make it explicit. (Hint: what's the point of having non-strict semantics if you have to assume that any expression may be side-effecting?)


I've only done a handful of Haskell tutorials; I mostly do Elixir (and sometimes Erlang) when it comes to FP. Elixir/Erlang is most definitely not pure or side-effect free.

I understand how IO can be made explicit, but I don't understand how it can be made pure. An addition might return `Nothing` if the input is malformed in some way as you say, but 1 + 1 will always return 2. On the other hand IO will return different values all the time. You can certainly make all of that explicit, but if given the same input you get different outputs, isn't that the opposite of "pure"?

I really have to dig into Haskell more...


> I understand how IO can be made explicit, but I don't understand how it can be made pure.

Expressions with IO in them do not "do" IO, they declare a composable specification for how IO is to be done.

When one of those expressions is "main", the runtime will do that specified IO (which, in the case of programs with nontrivial IO, will itself be composed of several other IO actions) when the program is run.

All the expressions containing IO are pure, because what they return every time they are called is the same specification of an IO action comprised of lower-level IO actions.

Values in the IO monad aren't the results of performing IO, they are procedures, in the imperative sense, that do IO. The "main" function of a Haskell program -- because it returns a value in that monad -- returns such a procedure, which the runtime executes.
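A minimal sketch of that idea (the names here are made up for illustration):

    -- `greet` is a value describing an IO action; defining it prints nothing.
    greet :: IO ()
    greet = putStrLn "hello"

    -- Composing descriptions is itself pure: `twice act` returns the same
    -- larger description every time it's applied to the same `act`.
    twice :: IO () -> IO ()
    twice act = act >> act

    main :: IO ()
    main = twice greet   -- the runtime executes this composed description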


Thanks for this clear explanation, the IO monad is starting to make sense to me now...


Functional programming isn't about being side-effect free, it's about controlling side effects.

You can't write (useful) pure programs, but you can write pure functions.


>As you can see, notation in functional languages is much closer to classic mathematical notation.

I've yet to be convinced that this is a good thing.


In the example in question, the author compares apples to oranges: an imperative iterative Fibonacci print-first-n-fibs vs. a functional recursive Fibonacci return-nth-fib. An imperative recursive Fibonacci return-nth-fib would be far closer to classical mathematical notation.

A good way to cut through the ideological BS is to look at how mathematicians write pseudo-code. It's almost always either pure imperative, or imperative with a tiny smattering of OOP/FP stuff (no thicker than you'd see in a sophisticated C project).


    int fib(int n)
    {
    	return (n <= 1) ? n : fib(n-1) + fib(n-2);
    }


For my part I'm absolutely convinced it's a horrible thing. The set of people that are comfortable with mathematical notation is tiny, even as a proportion of software developers.

For my part, I am very much in favour of a lot of what is espoused by FP advocates, but the typical approach to syntax is definitely not one of the things I'm in favour of. I'm convinced that "funny syntax" is a key reason why so many of the ideas of FP languages, as well as of many non-FP languages with odd syntax (like Smalltalk), end up getting reinvented over and over again decades after the fact in languages with less obstructive syntax.

But then again I'm one of those weirdos that believes that even CS papers substantially abuse and overuse mathematical notation where code or pseudo-code would communicate the ideas better and to a wider audience.

And before anyone complains about the lack of rigour if papers were to include pseudo-code: during my MSc research, of the 60 or so papers I reviewed, exactly none included sufficient detail in the notation they used to replicate their experiments directly, even if you'd had access to the same data sets. There were always lots of variables where, if you were lucky, they'd defined an interval of reasonable values, or where they lacked sufficient formality - e.g. they'd specified mathematical operations without taking into account rounding or precision, while experimentation made it clear their actual results depended heavily on how you'd round or what precision you'd use. Often these flaws would have been blatantly obvious as fuzzy handwaving if they'd written it out as pseudo-code or "just" plain English. Over many years both before and since, I've come to expect this as the norm for CS research, and I tend to see mathematical notation as a big warning sign that what follows is likely to be full of handwaving and missing details - the exceptions are few and far between.

We see some of the cost of this in the sheer amount of CS research that dies on the vine, without getting wider attention, because a lot of research requires "interpreters" with a foot in each camp, and the good ones are almost as rare as unicorns.

Personally I care much less about lists of language features than I care about whether typical programs read well and easily, and the only people I ever encounter that consider mathematical notation easy to read are mathematicians and a tiny, tiny subset of developers with sufficient interest in maths.

Using mathematical notation for programs is about as sensible as using some minor natural language you happen to be familiar with, but which most other developers aren't. Norwegian would work fine for me...


> But then again I'm one of those weirdos that believes that even CS papers substantially abuse and overuse mathematical notation where code or pseudo-code would communicate the ideas better and to a wider audience.

Agreed. I've been studying DSP for the past two years and it's amazing how many fundamentally simple concepts are obscured by jargon and overly formal mathematical descriptions. I understand that it's important to define these things precisely and rigorously on some level but I think there are much more effective and less pedantic ways of teaching these ideas.


If only they did define these things precisely and rigorously. Often it's sloppier than a lot of pseudo-code I see.

E.g. I did my MSc on reducing error rates in OCR through various statistical methods. As part of that I needed to review a lot of papers applying various filters to input images. I went through at least a dozen papers on thresholding (filtering out noise by excluding all pixels under a certain brightness, for example).

Nearly all the papers had formulas. In this case it was fine. A simple-stupid (and not very good) thresholding function is simply (pseudo code): f(color) = intensity(color) > threshold ? color : 0. It's hard to obscure that by changing the notation - it'll be obvious in most cases what such a simple function does.

But nearly all of the results depended on specific values for specific variables in those formulas, and nearly all of them left out sufficient information about the values that worked best (or worked at all) and/or did what I did above: define another function - intensity() in my case - that was left undefined, even though what method you'd use to determine intensity would have a drastic impact on the results.


(Peer) review is the problem. Everyone's working hard to sound as intelligent and rigorous as possible, which often means committing literary atrocities. I'm writing my master's thesis right now with passive voice and everything. Just because that's the way it should be done, apparently.

I've jokingly considered exchanging "time" with "temporal dimension main anthropoperceptory vector". It does sound more impressive, doesn't it? It's such a masquerade.


Agreed. I spent about half the time on my master's thesis making word and phrase changes that made the end result harder to read and understand.


I agree. I've tried to get into Clojure several times. The thing is I actually like the first code example. It's more explicit to me about what is happening.


Let me use Haskell here.

I don't think that Haskell's notation is so much mathematical[1] as it is similar to, yet better than, mathematical notation. Programmers have to be understood by an entity that does not understand hand-waving or shortcuts ("this is trivial"). What you sometimes end up with as a result is a concise yet easy-to-understand notation with a small set of rules to remember, as opposed to ad-hoc rules that can be overridden at any point for the author's convenience[2]. And you don't even need to install LaTeX.

I don't like to think of FP in the style of Haskell so much as mathematical as expression oriented. It's simple to see where an expression starts, and where it ends, and what subexpressions it consists of. It is simple to compose expressions, as opposed to statements which you typically just make longer lists of or stuff into procedures.

PS: Do note some researchers'/Haskell programmers' fondness for writing "Haskell" as mathematical LaTeX - things like replacing certain standard operators with prettier ones, e.g. replacing `.` with the dot operator more usually associated with function composition. This makes research papers harder to understand if their goal is to appeal to Haskell programmers' existing knowledge. An even worse offense is Graham Hutton's "Programming in Haskell" introductory book, which to some degree uses this practice in the code samples. Who on earth thought that would be appropriate for an introductory book?

[1] I'm referring to "math" as most people encounter it. Not any specific discipline or calculus, like the lambda ones.

[2] And yes, you should be able to expand macros and such, if they are in play.


As someone who has worked with OCaml and C++, and is now working with Scala, I must concur.

The first weeks or months are hard, because you have a lot of new mechanisms (and syntax) to learn. Then, when things get easier, designing reusable components with function composition becomes almost systematic. In the end, you'll be able to write code that feels like poetry (it's very difficult to put words on that feeling, so I understand if it sounds like a bad piece of propaganda to you).


I'm interested whether Scala's OOP hybridity leads to more readable code at a high level of abstraction than, for example, Haskell. Or do you mainly avoid the OOP features of the language?


I tend to avoid writing Scala code with lots of OOP patterns, because it really adds complexity where you don't need it. The "Functional Way" wins every time in terms of readability and simplicity, once you are accustomed to this logic.

I would say you can try, at the beginning, translating from an OOP language and OOP concepts to FP if that's easier for you, but eventually you'll end up avoiding that.

If you are looking for a way to organize your code, I'd suggest experimenting with OCaml modules. I found them very clean.


In my own (very limited!) experience, Scala code is way harder to read than Haskell's. Even type signatures of Scala functions are harder to understand, and more verbose. Scala brings OOP to the table, but I wouldn't say this makes it a "higher level of abstraction" either. It does make it more complex, though.


You can write OOP-like code in Haskell too... Just don't show your code to experienced Haskell developers.


If it's the right solution then an experienced Haskell developer will congratulate you. If it's the wrong solution they'll point you towards a functional solution which likely works better with the language/paradigm.


That was intended as a joke, but not completely. Emulating OOP seems to be deeply frowned upon by Haskell developers.

That is, until you actually do it; I don't think I ever saw people complaining about any actual instance of OOP-like code, just about the idea.


> Emulating OOP seems to be deeply frowned upon by Haskell developers.

Are you sure? I don't think many frown upon the Yi developers for it:

http://yi-editor.github.io/2014/09/05/oop/

> That is, until you actually do it; I don't think I ever saw people complaining about any actual instance of OOP-like code, just about the idea.

Sometimes there is something to complain about ;)

I remember listening to a Haskell Cast with Michael Snoyman and/or Gabriel Gonzalez, of Conduit and Pipes fame... whoever it was mentioned that their early designs were very imperative because they thought it was necessary for performance.

However, over time a functional API/core emerged that was more stable and performant.


This statement seems wrong:

"Functional programming languages fall under the declarative languages category, meaning algorithms are written in form of relations between steps"

The author also notes that productivity is a pro of FP. While I like much of the elegance of languages like Haskell and Lisp, it's never been clear whether FP is actually more productive for writing real code -- and if so, whether it is grossly more productive or only by small margins.


The only evidence of productivity boost I know of is the following paper by Paul Hudak and Mark P. Jones: http://www.cs.yale.edu/publications/techreports/tr1049.pdf, where the conclusions seem to support the "gross improvement" hypothesis :)

In the paper, the Haskell implementations beat several other languages in multiple aspects:

- They are shorter.

- They are complete (i.e. compile and actually work).

- They include extra features.

This would seem to show Haskell is actually a pretty great language for rapid prototyping!

Unfortunately the experiment is marred by several SERIOUS flaws:

- The requirements were pretty vague, and it was pretty much up to each implementer how much to implement, and what.

- The results were self-reported. Here I trust Paul Hudak not to lie about Haskell, but it's a huge flaw in the study.

- The review board didn't attempt to execute the code, and in the case of Functional Programming, they didn't fully understand it and thought some of it was trickery (they called it "too cute for its own good").

At the end of the paper, the authors propose a new study to remedy these problems, but unfortunately it never happened as far as I know.


The difference between his two Fibonacci implementations is recursion, not "FP-ness". You could write fib in an imperative language in a style that is close to the mathematical definition and readable to boot, for example, in Ruby:

    def fib(n) 
        case n
        when 0, 1
            n
        else
            fib(n-1) + fib(n-2)
        end
    end
  
    puts fib(10) # => 55


Why do you say this is an imperative style? Nothing is being mutated, just accumulated.


That's an interesting question; now you've made me think about it. Let me write the same program in ALGOL 60 first, which would have been considered an imperative language and was relatively "close to the metal" by our standards:

    integer procedure Fib (n); integer n;
    begin
        if n = 0 then
             Fib := 0
        else if n = 1 then
             Fib := 1
        else Fib := Fib(n-1) + Fib(n-2);
    end Fib
        
Besides the obvious differences in syntax, I'd say that the procedure/method/function/subroutine is the same. Still, a more fundamental difference is the "distance" from the machine. In ALGOL 60 we can almost see through it to the implementation in machine code / assembler (or whatever mix of the two was actually used in those days), where a call to a subroutine meant mutating state: the calls, their arguments, and the results had to be kept track of on the stack. For ALGOL 60 programmers of the time, their operational understanding of this snippet and ALGOL was imperative.

Compare that to Ruby in its modern context. Implementation details have been abstracted away, both in the language and in programmers' operational understanding of how the example and Ruby work. If I use and understand a machine that speaks Ruby (a way to look at using a programming language is to imagine a machine that has that programming language as its machine code), there's no mutation, only a variable sequence of calls to fib written succinctly in a recursive manner. How's that different from functional programming? To be honest, I don't know. Maybe we should start interpreting "imperative" and "functional" differently than our predecessors did. Or maybe these distinctions get less and less meaningful the more abstract a language or computing environment becomes?

(As an aside, it is also interesting you use "accumulated", as for most ALGOL 60 programmers the accumulator in the arithmetic unit of their massive computers would take a prominent place.)


Seeing all this incorrect Scheme code hurts...

    (define (lastHalf L N)
      (if (= N 0) L); Base Case
      (if (or (= N 1) (< N 2))
          (cdr L)
          ;else
          (lastHalf (cdr L) (- N 1)))
          )
Would be:

    (define (last-half L N)
      (if (= N 0)
          L
          (if (< N 2)
              (cdr L)
              (last-half (cdr L) (- N 1)))))


While Haskell is undoubtedly better suited to writing concise implementations of sorting algorithms, the C++ versions given could have been written in a much shorter and clearer fashion.


For C/C++ all that is usually needed is to implement a few helper functions.

E.g. the quicksort example is one of my favourite pet peeves, because the reason the C version sorts in place is that the archetypical C implementation after Hoare sorts in place.

Many of the FP versions of quicksorts like this end up requiring a lot of additional memory. Maybe it is becoming reasonable to assume they will optimise it into an in-place sort these days, but it didn't use to be.
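For reference, the typical Haskell quicksort these comparisons are based on looks something like this (head element as pivot, allocating fresh lists rather than sorting in place):

    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p]
                   ++ [p] ++
                   qsort [x | x <- xs, x >= p]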

Meanwhile, with a couple of small, generic helper functions, you can get down to nearly the same complexity in C/C++. For C++, the only helper you absolutely need to cut down on the verbosity has been a part of the standard library since C++98.

I wrote a blog post about this back in 2005 [1] about one of the other typical Haskell quicksort examples, and referenced an article by Alexandrescu that gave an example implementation of quicksort in C++ with the (generic) partitioning, as well as the pivot selection split out that looks like this:

        template <class Iter>
        void Sort(Iter b, Iter e)
        {
          const pair<Iter, Iter> midRange = Partition(b, e, SelectPivot(b, e));
          Sort(b, midRange.first);
          Sort(midRange.second, e);
        }
You can do the same in C easily enough. So what Haskell had over C/C++ in this respect was mainly a more expressive standard library. C++ has long had std::partition(), so it can be done entirely with the standard library too if you're willing to do the same (bad) fixed/dumb pivot selection that the typical Haskell versions (as well as the archetypical C version) use.

Unfortunately the link to the CUJ article is dead, as it was a quite fascinating walk through optimising sorting (including better pivot selection).

[1] http://www.hokstad.com/archives/2005/03/about_haskell_a


"In order to call a language purely functional, it has to have 6 distinct features"

Is there an international standard or some sort of formal proof?


The good thing about standards is that everyone can choose the one he likes. (I can't find the exact quote.)

Functional programming is like object-oriented programming: everyone has his own definition, very similar to the other definitions but with a few tweaks to make it possible to include his preferred language.

Anyway, I can't understand what this means (from the article):

> Static linking: All variables are allocated at compile time and are statically linked. No need to go into more details.

Does it mean no "eval" function?


No. At school, my teacher said that functional languages needed to have the following 4 features: procedures as data, tail recursion, closures, and garbage collection. There are many other features that define families of functional languages: homoiconicity vs. rich syntax, lazy vs. strict evaluation, pure vs. mutable, static vs. dynamic typing, call/cc, macros, ...


I'm not sure if there's anything inherently imperative about OO, as the author seems to imply. In addition, I'm not sure just how distanced most FP languages are from the von Neumann model. I know APL, J and K are categorically non-von Neumann.

> Static linking: All variables are allocated at compile time and are statically linked. No need to go into more details.

I think they meant static binding?


'Functional' means a different way of thinking, not of coding.

Also, Functional != Haskell. Its laziness messed everything up. The functional subset of Scheme (without set!) is better for grasping the essential ideas (not cluttered with irrelevant types or cryptic syntax).


I'm not sure if this counts as going 'functional', but after having done the FP Scala course I tried out lodash, and now it's the default dependency in all my web projects :D


Haskell is fun to play with, but most definitely isn't on the list of "languages the boss will actually let you use if you ask nicely." The JVM FP languages like Scala and Clojure are probably more worthwhile to learn, from that perspective (maybe F# if you're in .NET-land).

I learned Haskell first (man, what a slog that was) to come to grips with FP, then Scala so that I would actually have the opportunity to use a bit of functional programming on the job.

Update: Downvotes? Apparently I said something controversial here.


You're being downvoted for making the all too common suggestion that the only worthwhile programming languages are those already widely adopted in for-profit business. There are better programming languages out there than what your hypothetical boss will let you use, and they can be used for more than just "fun".


I do not regret learning Haskell and believe doing so was a worthwhile exercise. What I'm saying is that the number of jobs out there calling for Haskell, OCaml, etc. is so small that it is not reasonable to plan on being able to use such languages professionally.

There are many more programmers that would love to write Haskell for a living than there are job openings for Haskell developers. I have a strong dislike for programming in Java. But it pays the mortgage, so feh.


I'm using Haskell professionally right now.


My guess is because your post reads as if you were implying it was a mistake learning Haskell first. I'd say it was a big win; I'd recommend learning a principled FP language like Haskell first, then a hybrid language like Scala. To me it's harder to learn FP principles in Scala, where it's so easy to cheat and revert to the plain old Java ways... It's harder to see the point of many FP techniques when you can just go back to imperative programming whenever you hit a roadblock.

Probably also because Haskell is not only "fun to play with", but actually a capable language that can be used for real work (if only your boss would let you, of course, and that's a big "if").


It's a me, the author. Let me start.

First of all, I would like to address the 6 distinct features I was talking about in the theoretical introduction. Those 6 features/rules are described in the same way in a book called "Introduction to Scheme". I'm guessing I haven't explained it very well. Basically, what I meant is that those are the rules functional languages should follow in order to be called pure functional languages. I didn't say that that is the case with most languages, nor that this is the case with Haskell or Scheme. I know most of them don't, but that's just a piece of theory. And yes, I did mean static binding, I apologize; linking and binding are denoted by the same word in my language.

Next thing, about the algorithms. The Fibonacci example was just to show closeness to mathematical notation, and I know it's probably the worst way to use recursion because of the O(2^n) time complexity. But, again, complexity was not the point of the example. About the notation, I'm guessing it's just me then. I was a lot more comfortable with pure mathematical notation, as I like things to be as formal as they can be, and that's something that mathematics provides.

The whole point of the post itself was to give a small introduction to functional programming and show off features that may not be available in most imperative languages.

Of course, as I've also said in the outro, I'm not really planning on switching to purely functional languages, I was just trying to say that they can be fun in some situations and for some problem solving. The different way of thinking is what makes it so fun, at least for me.

About higher-order functions, I truly was mistaken, as I thought they were a thing for functional languages only. Guess I don't have that much experience in the imperative world, as I've never come across them in my line of work.

So please, don't take this article as anti-imperative, as it's most certainly not.

So, tl;dr: I instantly liked functional programming and I wanted to share the features that I liked the most and hadn't seen in the imperative world.

I do apologize for the misleading/possibly incorrect parts, and I shall edit those at my earliest convenience.


That is not a good font. Very hard to read on my screen (non-retina).


Even on a retina screen: the font needs to be bolder, since the background is dark. And it's a bit small, too.


`reduce` is `foldl` rather than `foldr`.
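A quick Haskell illustration of the difference in associativity:

    foldl (-) 0 [1,2,3]   -- ((0 - 1) - 2) - 3  ==  -6  (like most reduce implementations)
    foldr (-) 0 [1,2,3]   -- 1 - (2 - (3 - 0))  ==   2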


> Few months ago, I came across a new paradigm (to me, at least) called functional programming.

You can never go back, at least from the perspective of someone two months into learning it.



