So here goes, Why Functional Programming Still Matters in 2012:
In the introduction John stated a few facts that blew my mind:
- Functional programs contain no assignment statements. Variables are given a value once, and then they never change. (otherwise known as immutability)
- Functional programs contain no side-effects at all (and thus fewer bugs), since "a function call can have no effect other than to compute its result." Reminds me of a chapter I just finished reading in DDD
- Expressions can be freely replaced with variables and (more importantly in my opinion) vice versa. It's so important that I want to write it out-- variables can be replaced by expressions
- Functional programs are more modular, and functionality is easier to swap
- Programmers who take the functional programming route are an order of magnitude faster than those who do not
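That substitution point is easier to see with a quick sketch. Here's a tiny JS example (mine, not from the paper) showing why a variable and the expression that produced it are interchangeable when functions are pure:

```javascript
// A pure function's result depends only on its inputs, so a variable
// and the expression that produced it can be swapped freely.
const square = (n) => n * n;

const x = 2 + 3;
const a = square(x);     // variable form
const b = square(2 + 3); // the expression substituted in place of the variable
// a and b are guaranteed equal -- no hidden state can make them differ.
```

With assignment and side-effects in the mix, that guarantee disappears, which is exactly the point of the bullet above.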
At this point you should want to find out why functional programming is the greatest thing in the world. At least that's where I was mentally. I don't mean to spoil it for you, but as we read we'll find out that the key to functional programming's greatness is its modularity.
An Analogy with Structured Programming
John unofficially defines a structured program as something that allows for modularity. Anyone who has done anything even remotely serious in programming knows how important modularity is. I really like John's quote on the matter:
When writing a modular program to solve a problem, one first divides the problem into sub-problems, then solves the sub-problems and combines the solutions. The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together. Therefore, to increase one's ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language.

I think all of us can speak to that in one shape or another-- we create functions that plug into other functions that plug into the main execution of the program, and it's all reusable and all that-- but the point John is trying to make is that the difference between functional programs and others is where you're able to be modular.
So in other words, the argument is that "functional programming has the best glue." Let's start taking a look at the sort of glue John would have us use.
Interlude-- A Bit of Lisp
Before we get there we need to bring the conversation into the present. IMO John has a little something going for him and his argument--Lisp. Yes, although Lisp is decades old, it's really advantageous for functional programming. Why? To start with, because its lists are built with functions. (This is all in my opinion; please note that I am not a Lisp-master and only briefly messed around with languages like Lisp and MIT Scheme in college @ Wheeling Jes)
I know you need that explained. In Lisp, lists are created using the function cons. So the list we'd call [1, 2] is written (cons 1 (cons 2 nil)) in Lisp. Simply put, cons means "make the two things after me into a list." The nil you see there is null in Lisp, which essentially in this example means "end of the list." So the second half, (cons 2 nil), means "make 2 and nothing else into a list." Lisp is also written in prefix notation, meaning that 2 + 2 becomes (+ 2 2) in Lisp--the function/operator comes first. It looks weird but you could get used to it.
Now say we had a souped-up combining function of our own, the kind we'd call as Superify(2,2) in other languages. In Lisp, we would only have to swap all instances of cons for it to make (Superify 1 (Superify 2 nil)) and we're all done.
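To make that concrete for non-Lispers, here's a rough JS approximation (cons, nil, and superify here are my own stand-ins, not real Lisp):

```javascript
// A cons cell: a head value plus the rest of the list.
const nil = null;
const cons = (head, tail) => ({ head, tail });

// (cons 1 (cons 2 nil)) -- the list [1, 2]
const list = cons(1, cons(2, nil));

// Because the list is literally built from function calls, "swapping the
// constructor" just means calling a different function in the same spots.
// superify is a hypothetical stand-in that adds instead of building cells:
const superify = (a, b) => a + (b === nil ? 0 : b);
const result = superify(1, superify(2, nil)); // sums where cons once built
```

The same shape that built the list now computes over it -- which is the trick the rest of the paper runs with.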
And there's more that we'll get into later... back to the paper.
Glueing Functions Together
If I asked the majority of programmers to make a function that adds up items in a list, I'd probably get something like so:
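Probably something along these lines (my sketch of the usual answer; sumList is an assumed name):

```javascript
function sumList(list) {
  let sum = 0; // start value for addition
  for (let i = 0; i < list.length; i++) {
    sum += list[i]; // do something with each item
  }
  return sum;
}
```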
If I then asked them to create a function that multiplies all the items in a list, I'd probably get something like so:
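Again a sketch of the typical answer (multiplyList is an assumed name; note it's the same loop with the start value and operator changed):

```javascript
function multiplyList(list) {
  let sum = 1; // start at 1, not 0, so the product isn't wiped out
  for (let i = 0; i < list.length; i++) {
    sum *= list[i];
  }
  return sum;
}
```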
There's a lot of duplicate code between those two, and not a ton of modularity-- take a look. Both iterate through a list, do something with each item, and then return the result. Further, each has its own default/start value in the form of sum--notice how it's defaulted to 1 for multiplication and 0 for addition?
John discussed this, suggesting instead that we make the code more modular by creating a function that allows us to keep the similarities and pass in the differences. In his paper, he suggested a function called reduce with the form reduce(func, list, a), where func is a function to apply to each item, list is a list/array of objects, and a is what to substitute in instead of nil, or what to use as a default.
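A JS version of that reduce might look like this (a sketch following the reduce(func, list, a) form; walking the list right-to-left mirrors the cons-swapping idea, with func standing in for cons and a standing in for nil):

```javascript
function reduce(func, list, a) {
  let result = a; // a plays the role of nil: the answer for an empty list
  for (let i = list.length - 1; i >= 0; i--) {
    result = func(list[i], result); // func plays the role of cons
  }
  return result;
}

// The two earlier functions collapse into one-liners:
const add = (x, y) => x + y;
const multiply = (x, y) => x * y;
const total = reduce(add, [1, 2, 3, 4], 0);        // 10
const product = reduce(multiply, [1, 2, 3, 4], 1); // 24
```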
Look at how modular that is-- we can create new functions that work with a list of items and easily get them up and running without rewriting all of that boilerplate iteration code. Hughes calls functions like reduce "higher order functions."
John also describes a function that walks over a list, applying a function to each item in the list and returning a new list containing the results-- map. Here is an example of map written in JS.
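A sketch of what that JS map could look like (argument order chosen to match reduce(func, list, a) above; JavaScript's built-in Array.prototype.map does the same job as a method on the array):

```javascript
// map(func, list): apply func to each item, collect the results in a new list.
function map(func, list) {
  const results = [];
  for (let i = 0; i < list.length; i++) {
    results.push(func(list[i]));
  }
  return results;
}

const doubled = map((x) => x * 2, [1, 2, 3]); // [2, 4, 6]
```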
So let's say we wanted to print out all the items in an array-- we could do that pretty simply with the map function, like so: map(alert, anArray), and skip all the boilerplate for-each-ing.
Where to Go From Here
One more small piece of advice-- don't write your own map/reduce for production use. Use something like underscore.js, whose implementations are as browser-optimized as possible.