Purely Functional Data Structures

Chris Okasaki


This book describes data structures and data structure design techniques for functional languages.


Mentioned in questions and answers.

Does anyone know what is the worst possible asymptotic slowdown that can happen when programming purely functionally as opposed to imperatively (i.e. allowing side-effects)?

Clarification from comment by itowlson: is there any problem for which the best known non-destructive algorithm is asymptotically worse than the best known destructive algorithm, and if so by how much?

According to Pippenger [1996], when comparing a Lisp system that is purely functional (and has strict evaluation semantics, not lazy) to one that can mutate data, an algorithm written for the impure Lisp that runs in O(n) can be translated to an algorithm in the pure Lisp that runs in O(n log n) time (based on work by Ben-Amram and Galil [1992] about simulating random access memory using only pointers). Pippenger also establishes that there are algorithms for which that is the best you can do; there are problems which are O(n) in the impure system which are Ω(n log n) in the pure system.

There are a few caveats to be made about this paper. The most significant is that it does not address lazy functional languages, such as Haskell. Bird, Jones and De Moor [1997] demonstrate that the problem constructed by Pippenger can be solved in a lazy functional language in O(n) time, but they do not establish (and as far as I know, no one has) whether or not a lazy functional language can solve all problems in the same asymptotic running time as a language with mutation.

The problem Pippenger constructs to require Ω(n log n) is specifically designed to achieve this result, and is not necessarily representative of practical, real-world problems. There are a few restrictions on the problem that are a bit unexpected, but necessary for the proof to work; in particular, the problem requires that results are computed on-line, without being able to access future input, and that the input consists of a sequence of atoms from an unbounded set of possible atoms, rather than a fixed-size set. And the paper only establishes (lower bound) results for an impure algorithm of linear running time; for problems that require a greater running time, it is possible that the extra O(log n) factor seen in the linear problem may be "absorbed" by the extra operations necessary for algorithms with greater running times. These clarifications and open questions are explored briefly by Ben-Amram [1996].

In practice, many algorithms can be implemented in a pure functional language at the same efficiency as in a language with mutable data structures. For a good reference on techniques to use for implementing purely functional data structures efficiently, see Chris Okasaki's "Purely Functional Data Structures" [Okasaki 1998] (which is an expanded version of his thesis [Okasaki 1996]).

Anyone who needs to implement algorithms on purely-functional data structures should read Okasaki. You can always get at worst an O(log n) slowdown per operation by simulating mutable memory with a balanced binary tree, but in many cases you can do considerably better than that, and Okasaki describes many useful techniques, from amortized techniques to real-time ones that do the amortized work incrementally. Purely functional data structures can be a bit difficult to work with and analyze, but they provide many benefits like referential transparency that are helpful in compiler optimization, in parallel and distributed computing, and in implementation of features like versioning, undo, and rollback.
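
To make that O(log n) simulation concrete, here is a minimal sketch in Haskell of a persistent "memory" built on a balanced map (Data.Map); the names Mem, writeMem, and readMem are just illustrative:

import qualified Data.Map.Strict as Map

-- A persistent "memory": addresses map to values. Each read or write costs
-- O(log n) instead of the O(1) of a real mutable array, which is exactly the
-- per-operation slowdown described above.
type Mem a = Map.Map Int a

writeMem :: Int -> a -> Mem a -> Mem a
writeMem = Map.insert

readMem :: Int -> Mem a -> Maybe a
readMem = Map.lookup

-- e.g. readMem 3 (writeMem 3 'x' Map.empty) == Just 'x'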

Note also that all of this discusses only asymptotic running times. Many techniques for implementing purely functional data structures give you a certain amount of constant factor slowdown, due to extra bookkeeping necessary for them to work, and implementation details of the language in question. The benefits of purely functional data structures may outweigh these constant factor slowdowns, so you will generally need to make trade-offs based on the problem in question.


I'm going to be teaching a lower-division course in discrete structures. I have selected the text book Discrete Structures, Logic, and Computability in part because it contains examples and concepts that are conducive to implementation with a functional programming language. (I also think it's a good textbook.)

I want an easy-to-understand FP language to illustrate DS concepts and that the students can use. Most students will have had only one or two semesters of programming in Java, at best. After looking at Scheme, Erlang, Haskell, Ocaml, and SML, I've settled on either Haskell or Standard ML. I'm leaning towards Haskell for the reasons outlined below, but I'd like the opinion of those who are active programmers in one or the other.

  • Both Haskell and SML have pattern matching which makes describing a recursive algorithm a cinch.
  • Haskell has nice list comprehensions that match nicely with the way such lists are expressed mathematically.
  • Haskell has lazy evaluation. Great for constructing infinite lists using the list comprehension technique (a small example follows this list).
  • SML has a truly interactive interpreter in which functions can be both defined and used. In Haskell, functions must be defined in a separate file and compiled before being used in the interactive shell.
  • SML gives explicit confirmation of the function argument and return types in a syntax that's easy to understand. For example: val foo = fn : int * int -> int. Haskell's implicit curry syntax is a bit more obtuse, but not totally alien. For example: foo :: Int -> Int -> Int.
  • Haskell uses arbitrary-precision integers by default. It's an external library in SML/NJ. And SML/NJ truncates output to 70 characters by default.
  • Haskell's lambda syntax is subtle -- it uses a single backslash. SML is more explicit. Not sure if we'll ever need lambda in this class, though.
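
For instance, a small sketch of the kind of infinite list I have in mind (squares is just an illustrative name):

squares :: [Integer]
squares = [n * n | n <- [1 ..]]   -- an infinite list, written like set-builder notation

firstTen :: [Integer]
firstTen = take 10 squares        -- laziness computes only the ten squares needed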

Essentially, SML and Haskell are roughly equivalent. I lean toward Haskell because I'm loving the list comprehensions and infinite lists in Haskell. But I'm worried that the extensive number of symbols in Haskell's compact syntax might cause students problems. From what I've gathered reading other posts on SO, Haskell is not recommended for beginners starting out with FP. But we're not going to be building full-fledged applications, just trying out simple algorithms.

What do you think?


Edit: Upon reading some of your great responses, I should clarify some of my bullet points.

In SML, there's no syntactic distinction between defining a function in the interpreter and defining it in an external file. Let's say you want to write the factorial function. In Haskell you can put this definition into a file and load it into GHCi:

fac 0 = 1
fac n = n * fac (n-1)

To me, that's clear, succinct, and matches the mathematical definition in the book. But if you want to write the function in GHCi directly, you have to use a different syntax:

let fac 0 = 1; fac n = n * fac (n-1)

When working with interactive interpreters, from a teaching perspective it's very, very handy when the student can use the same code in both a file and the command line.

By "explicit confirmation of the function," I meant that upon defining the function, SML right away tells you the name of the function, the types of the arguments, and the return type. In Haskell you have to use the :type command and then you get the somewhat confusing curry notation.

One more cool thing about Haskell--this is a valid function definition:

fac 0 = 1
fac (n+1) = (n+1) * fac n

Again, this matches a definition they might find in the textbook. Can't do that in SML!

Much as I love Haskell, here are the reasons I would prefer SML for a class in discrete math and data structures (and most other beginners' classes):

  • Time and space costs of Haskell programs can be very hard to predict, even for experts. SML offers much more limited ways to blow the machine.

  • Syntax for function definition in an interactive interpreter is identical to the syntax used in a file, so you can cut and paste.

  • Although operator overloading in SML is totally bogus, it is also simple. It's going to be hard to teach a whole class in Haskell without having to get into type classes.

  • Student can debug using print. (Although, as a commenter points out, it is possible to get almost the same effect in Haskell using Debug.Trace.trace.)

  • Infinite data structures blow people's minds. For beginners, you're better off having them define a stream type complete with ref cells and thunks, so they know how it works:

    datatype 'a thunk_contents = UNEVALUATED of unit -> 'a
                               | VALUE of 'a
    type 'a thunk = 'a thunk_contents ref
    (* delay : (unit -> 'a) -> 'a thunk *)
    fun delay f = ref (UNEVALUATED f)
    (* force : 'a thunk -> 'a *)
    fun force t =
        case !t of
            VALUE v => v
          | UNEVALUATED f => let val v = f () in t := VALUE v; v end
    

    Now it's not magic any more, and you can go from here to streams (infinite lists).

  • Layout is not as simple as in Python and can be confusing.

There are two places Haskell has an edge:

  • In core Haskell you can write a function's type signature just before its definition. This is hugely helpful for students and other beginners. There just isn't a nice way to deal with type signatures in SML.

  • Haskell has better concrete syntax. The Haskell syntax is a major improvement over ML syntax. I have written a short note about when to use parentheses in an ML program; this helps a little.

Finally, there is a sword that cuts both ways:

  • Haskell code is pure by default, so your students are unlikely to stumble over impure constructs (IO monad, state monad) by accident. But by the same token, they can't print, and if you want to do I/O then at minimum you have to explain do notation, and return is confusing.

On a related topic, here is some advice for your course preparation: don't overlook Purely Functional Data Structures by Chris Okasaki. Even if you don't have your students use it, you will definitely want to have a copy.

One of the arguments I've heard against functional languages is that single assignment coding is too hard, or at least significantly harder than "normal" programming.

But looking through my code, I realized that I really don't have many (any?) use patterns that can't be written just as well using single assignment form if you're writing in a reasonably modern language.

So what are the use cases for variables that vary within a single invocation of their scope? Bearing in mind that loop indexes, parameters, and other scope-bound values that vary between invocations aren't multiple assignments in this case (unless you have to change them in the body for some reason), and assuming that you are writing in something far enough above the assembly language level, where you can write things like

values.sum

or (in case sum isn't provided)

function collection.sum --> inject(zero, function (v,t) --> t+v )

and

x = if a > b then a else b

or

n = case s 
  /^\d*$/ : s.to_int
  ''      : 0
  '*'     : a.length
  '?'     : a.length.random
  else    fail "I don't know how many you want"

when you need to, and have list comprehensions, map/collect, and so forth available.

Do you find that you still want/need mutable variables in such an environment, and if so, what for?

To clarify, I'm not asking for a recitation of the objections to SSA form, but rather concrete examples where those objections would apply. I'm looking for bits of code that are clear and concise with mutable variables and couldn't be written so without them.

My favorite examples so far (and the best objection I expect to them):

  1. Paul Johnson's Fisher-Yates algorithm answer, which is pretty strong when you include the big-O constraints. But then, as catulahoops points out, the big-O issue isn't tied to the SSA question, but rather to having mutable data types, and with that set aside the algorithm can be written rather clearly in SSA:

     shuffle(Lst) ->
         array:to_list(shuffle(array:from_list(Lst), erlang:length(Lst) - 1)).
     shuffle(Array, 0) -> Array;
     shuffle(Array, N) ->
         K = random:uniform(N) - 1,
         Ek = array:get(K, Array),
         En = array:get(N, Array),
         shuffle(array:set(K, En, array:set(N, Ek, Array)), N-1).
    
  2. jpalecek's area of a polygon example:

    def area(figure : List[Point]) : Float = {
      if (figure.isEmpty) return 0
      var last = figure(0)
      val first = figure(0)
      var ret = 0f
      for (pt <- figure) {
        ret += crossprod(last - first, pt - first)
        last = pt
      }
      ret
    }
    

    which might still be written something like:

    def area(figure : List[Point]) : Float = {
        if figure.length < 3
            0
          else
            var a = figure(0)
            var b = figure(1)
            var c = figure(2)
            if figure.length == 3
                magnitude(crossproduct(b-a,c-a))
              else 
                foldLeft((0,a,b))(figure.rest) {
                   ((t,a,b),c) => (t+area([a,b,c]),a,c)
                   }
    

    Or, since some people object to the density of this formulation, it could be recast:

    def area([])    = 0.0   # An empty figure has no area
    def area([_])   = 0.0   # ...nor does a point
    def area([_,_]) = 0.0   # ...or a line segment
    def area([a,b,c]) =     # The area of a triangle can be found directly
        magnitude(crossproduct(b-a,c-a))
    def area(figure) =      # For larger figures, reduce to triangles and sum
        as_triangles(figure).collect(area).sum
    
    def as_triangles([])      = []  # No triangles without at least three points
    def as_triangles([_])     = []
    def as_triangles([_,_])   = []
    def as_triangles([a,b,c | rest]) = [[a,b,c] | as_triangles([a,c | rest])]
    
  3. Princess's point about the difficulty of implementing O(1) queues with immutable structures is interesting (and may well provide the basis for a compelling example) but as stated it's fundamentally about the mutability of the data structure, and not directly about the multiple assignment issue.

  4. I'm intrigued by the Sieve of Eratosthenes answer, but unconvinced. The proper big-O, pull-as-many-primes-as-you'd-like generator given in the paper he cited does not look easy to implement correctly, with or without SSA.


Well, thanks everyone for trying. As most of the answers turned out to be either 1) based on mutable data structures, not on single assignment, or 2) to the extent they were about single-assignment form, easily countered by practitioners skilled in the art, I'm going to strike the line from my talk and/or restructure it (maybe keep it in backup as a discussion topic in the unlikely event I run out of words before I run out of time).

Thanks again.

I think you'll find the most productive languages allow you to mix functional and imperative styles, such as OCaml and F#.

In most cases, I can write code which is simply a long line of "map x to y, reduce y to z". In 95% of cases, functional programming simplifies my code, but there is one area where immutability shows its teeth:

The wide disparity between the ease of implementing an immutable stack and an immutable queue.

Stacks are easy and mesh well with persistence, queues are ridiculous.

The most common implementations of immutable queues use one or more internal stacks and stack rotations. The upside is that these queues run in O(1) most of the time, but some operations will run in O(n). If you're relying on persistence in your application, then it's possible in principle that every operation runs in O(n). These queues are no good when you need real-time (or at least consistent) performance.
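
For concreteness, here is a minimal sketch (in Haskell, with names of my choosing) of the two-list "batched" queue being described:

data Queue a = Queue [a] [a]   -- front list, rear list (rear is stored reversed)

emptyQ :: Queue a
emptyQ = Queue [] []

-- O(1): new elements are consed onto the rear list.
enqueue :: a -> Queue a -> Queue a
enqueue x (Queue front rear) = Queue front (x : rear)

-- Usually O(1), but when the front list is empty the rear list must be
-- reversed, which is O(n) -- and with persistence that expensive call can be
-- replayed against the same old queue.
dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue [] [])          = Nothing
dequeue (Queue [] rear)        = dequeue (Queue (reverse rear) [])
dequeue (Queue (x:front) rear) = Just (x, Queue front rear)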

Chris Okasaki provides an implementation of immutable queues in his book; they use laziness to achieve O(1) for all operations. It's a very clever, reasonably concise implementation of a real-time queue -- but it requires a deep understanding of its underlying implementation details, and it's still an order of magnitude more complex than an immutable stack.

In contrast, I can write a stack and queue using mutable linked lists which run in constant time for all operations, and the resulting code would be very straightforward.


Regarding the area of a polygon, it's easy to convert it to functional form. Let's assume we have a Vector module like this:

module Vector =
    type point =
        { x : float; y : float}
        with
            static member ( + ) ((p1 : point), (p2 : point)) =
                { x = p1.x + p2.x;
                  y = p1.y + p2.y;}

            static member ( * ) ((p : point), (scalar : float)) =
                { x = p.x * scalar;
                  y = p.y * scalar;}

            static member ( - ) ((p1 : point), (p2 : point)) = 
                { x = p1.x - p2.x;
                  y = p1.y - p2.y;}

    let empty = { x = 0.; y = 0.;}
    let to_tuple2 (p : point) = (p.x, p.y)
    let from_tuple2 (x, y) = { x = x; y = y;}
    let crossproduct (p1 : point) (p2 : point) =
        { x = p1.x * p2.y; y = -p1.y * p2.x }

We can define our area function using a little bit of tuple magic:

let area (figure : point list) =
    figure
    |> Seq.map to_tuple2
    |> Seq.fold
        (fun (sum, (a, b)) (c, d) -> (sum + a*d - b*c, (c, d) ) )
        (0., to_tuple2 (List.hd figure))
    |> fun (sum, _) -> abs(sum) / 2.0

Or we can use the cross product instead

let area2 (figure : point list) =
    figure
    |> Seq.fold
        (fun (acc, prev) cur -> (acc + (crossproduct prev cur), cur))
        (empty, List.hd figure)
    |> fun (acc, _) -> abs(acc.x + acc.y) / 2.0

I don't find either function unreadable.

What would be an idiomatic way to represent a tree in Clojure? E.g.:

     A
    / \
   B   C
  /\    \
 D  E    F

Performance is not important and the trees won't grow past 1000 elements.

Trees underlie just about everything in Clojure because they lend themselves so nicely to structural sharing in persistent data structures. Maps and Vectors are actually trees with a high branching factor to give them bounded lookup and insert time. So the shortest answer I can give (though it's not really that useful) is that I really recommend Purely Functional Data Structures by Chris Okasaki for a real answer to this question. Also see Rich Hickey's video on Clojure data structures on blip.tv.

(set 'A 'B 'C)

Basically, I know how to create graph data structures and use Dijkstra's algorithm in programming languages where side effects are allowed. Typically, graph algorithms use a structure to mark certain nodes as 'visited', but this has side effects, which I'm trying to avoid.

I can think of one way to implement this in a functional language, but it basically requires passing around large amounts of state to different functions, and I'm wondering if there is a more space-efficient solution.

I just keep the visited set as a set and pass it as a parameter. There are efficient log-time implementations of sets of any ordered type and extra-efficient sets of integers.

To represent a graph I use adjacency lists, or I'll use a finite map that maps each node to a list of its successors. It depends what I want to do.
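
To illustrate, a minimal Haskell sketch of threading the visited set through a depth-first traversal (Graph and reachable are names I'm using for illustration):

import qualified Data.Map as Map
import qualified Data.Set as Set

-- The graph is a finite map from each node to its list of successors.
type Graph = Map.Map Int [Int]

-- Depth-first reachability: the visited set is an ordinary parameter, so no
-- mutation is needed, and each membership test or insert is O(log n).
reachable :: Graph -> Int -> Set.Set Int
reachable g start = go start Set.empty
  where
    go n visited
      | n `Set.member` visited = visited
      | otherwise =
          foldl (flip go) (Set.insert n visited) (Map.findWithDefault [] n g)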

Rather than Abelson and Sussman, I recommend Chris Okasaki's Purely Functional Data Structures. I've linked to Chris's dissertation, but if you have the money, he expanded it into an excellent book.


Just for grins, here's a slightly scary reverse postorder depth-first search done in continuation-passing style in Haskell. This is straight out of the Hoopl optimizer library:

postorder_dfs_from_except :: forall block e . (NonLocal block, LabelsPtr e)
                          => LabelMap (block C C) -> e -> LabelSet -> [block C C]
postorder_dfs_from_except blocks b visited =
 vchildren (get_children b) (\acc _visited -> acc) [] visited
 where
   vnode :: block C C -> ([block C C] -> LabelSet -> a) 
                      -> ([block C C] -> LabelSet -> a)
   vnode block cont acc visited =
        if setMember id visited then
            cont acc visited
        else
            let cont' acc visited = cont (block:acc) visited in
            vchildren (get_children block) cont' acc (setInsert id visited)
      where id = entryLabel block
   vchildren bs cont acc visited = next bs acc visited
      where next children acc visited =
                case children of []     -> cont acc visited
                                 (b:bs) -> vnode b (next bs) acc visited
    get_children block = foldr add_id [] $ targetLabels block
   add_id id rst = case lookupFact id blocks of
                      Just b -> b : rst
                      Nothing -> rst

Most functional languages support inner functions. So you can just create your graph representation in the outermost layer and just reference it from the inner function.

This book covers it extensively: http://www.amazon.com/gp/product/0262510871

This question started out from

  1. My translating of "ML for the Working Programmer" (WorldCat) by L. C. Paulson to F#, which uses functors for the examples.
  2. Eventual desire to translate "Purely Functional Data Structures" (WorldCat) by Chris Okasaki which uses functors.
  3. Reading "CATEGORIES TYPES AND STRUCTURES - An Introduction to Category Theory for the working computer scientist" (WorldCat) by Andrea Asperti and Giuseppe Longo.
  4. Not understanding it all, mostly the category theory.

SML.NET can do functors and works with Microsoft .NET.
* See: SML.NET User Guide Section 4.8.2 Class types and functors?

I keep seeing that F# cannot do true functors because of some limitation in Microsoft .NET.
* Can ML functors be fully encoded in .NET (C#/F#)?
* Any workaround for functor?

So if SML.NET could do functors on .NET then why can't F#? What did SML.NET do that F# can't?

The more I learn about functors coming from category theory, the more I see the beauty of them and desire to have them in F#.

EDIT

In a pursuit to better understand the relation between category theory and functional programming see these Q&A at CS:StackExchange.

There's no fundamental limitation of .NET that stops functors from being implemented in F#. True, they can't be represented directly in .NET metadata, but neither can other F# language features like union types. Compilers for languages with functors (e.g., Standard ML, OCaml) have a pass called defunctorize; it works just like C++ template expansion, in that it "flattens" the functors by specializing them into normal modules.

The F# compiler could do the same thing, but you then have to ask: how will this be exposed to other .NET languages? Since functors can't be directly encoded in the .NET type system, you'd need to come up with some way to represent them; and if that representation is difficult/impossible to use from C# or VB.NET, would it still make sense to include F# functors? A non-trivial part of F#'s success comes from its ability to easily interop (in both directions) with C# and VB.NET.

EDIT: Don't get me wrong -- I'd love to have functors in F#, they'd be really useful to handle a few cases which are currently painful and/or impossible to implement without them. I'm just pointing out that the main reason the language doesn't yet (and maybe won't ever) have functors is that the interop issue hasn't been solved; the metadata-encoding issue is actually the easy part.

EDIT 2: Code for the defunctorize pass of MLton: defunctorize.fun

Update: I had a thought about how functors actually could be expressed within the .NET type system, so I put together a little experiment. It isn't pretty, but it works -- so now we know it's at least plausible that F# could one day support functors. In practice, the complexity you see in my experimental code would all be hidden by the compiler/language. If you want to check it out: experimental-functors

I'm an OK C/C++ programmer. I find Haskell very intriguing. But it seems to me that although it's relatively easy to write clean Haskell code, as it mimics math (which I'm very comfortable with) pretty well, it's very hard to write clean code in Haskell that runs fast.

A faster version of quicksort in Haskell is very long and scary, and bears no resemblance to the naive but short (two lines), clean and intuitive implementation. The long and scary Haskell version is actually still much slower than the shorter and simpler C counterpart.

Is it because the current Haskell compiler is too dumb, or is it just impossible for mortals (other than SPJ of course) to write fast Haskell code?

You ask two different questions: learning and performance.

  • It took me about a month to become comfortable with functional programming using recursion, pattern matching, map, filter, and fold. I did all that with ML but it translated to Haskell very easily.
  • It took me two or three years to wrap my head around monads, but that's because I read the wrong stuff. I think there are better tutorials now. But if you're beginning, avoid monads for a while.
  • It took me several months to get good at creating new type classes, but using the existing ones was easy.
  • I'm still not sure I have the hang of lazy evaluation. But I love Haskell's purity and tend to treat lazy evaluation as an unhappy accident that only a few people (like John Hughes) know how to exploit.

You've observed a performance problem only because you've adapted an algorithm loaded with mutation, which Tony Hoare designed for imperative languages, and tried to translate it into Haskell. In Haskell, as in any other functional language, the expensive operation is allocation. Try writing a merge sort and you'll find it's simple and performs very well.
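
For example, a merge sort sketch in Haskell is about as short as the naive quicksort and allocates in a way the runtime handles well:

mergeSort :: Ord a => [a] -> [a]
mergeSort []  = []
mergeSort [x] = [x]
mergeSort xs  = merge (mergeSort front) (mergeSort back)
  where
    (front, back) = splitAt (length xs `div` 2) xs
    merge [] ys = ys
    merge as [] = as
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs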

How do you avoid making similar mistakes in the future? Have a look at Chris Okasaki's book Purely Functional Data Structures. Great book, and it will help you learn the 'functional way of doing things' without giving up performance.

I am currently working on the React JS & React Native frameworks. Along the way I came across immutability and the Immutable-JS library, while reading about Facebook's Flux and Redux implementations.

The question is, why is immutability so important? What is wrong in mutating objects? Doesn't it make things simple?

Giving an example, Let us consider a simple News reader app. With the opening screen being a list view of news headlines.

Say I initially set an array of objects with some values. I can't manipulate it afterwards; that's what the immutability principle says, right? (Correct me if I am wrong.) But what if I have a new News object that has to be added? In the usual case, I could have just added the object to the array. How do I achieve that here? Delete the store and recreate it? Isn't adding an object to the array a less expensive operation?

PS: If the example is not the right way to explain immutability, please do let me know what's the right practical example.

I am trying to learn what's right here. Please do enlighten me :)

Although the other answers are fine, to address your question about a practical use case (from the comments on the other answers), let's step outside your running code for a minute and look at the ubiquitous answer right under your nose: git. What would happen if every time you pushed a commit you overwrote the data in the repository?

Now we're on to one of the problems that immutable collections face: memory bloat. Git is smart enough not to simply make new copies of files every time you make a change; instead, it simply keeps track of the diffs.

While I don't know much about the inner workings of git, I can only assume it uses a similar strategy to that of libraries you reference: structural sharing. Under the hood the libraries use tries or other trees to only track the nodes that are different.

This strategy is also reasonably performant for in-memory data structures as there are well-known tree-operation algorithms that operate in logarithmic time.

Another use case: say you want an undo button on your webapp. With immutable representations of your data, implementing such is relatively trivial. But if you rely on mutation, that means you have to worry about caching the state of the world and making atomic updates.
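
Sketching that undo idea in a functional language (Haskell here, though the pattern is language-agnostic; History, update, and undo are names invented for this sketch):

-- History keeps the current state plus every previous state. Because states
-- are immutable and structurally shared, pushing a state is cheap.
data History s = History { current :: s, past :: [s] }

update :: (s -> s) -> History s -> History s
update f (History cur ps) = History (f cur) (cur : ps)

undo :: History s -> History s
undo h@(History _ [])     = h             -- nothing to undo
undo (History _ (p : ps)) = History p ps  -- pop the previous state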

In short, there's a price to pay for immutability in runtime performance and the learning curve. But any experienced programmer will tell you that debugging time outweighs code-writing time by an order of magnitude. And the slight hit on runtime performance is likely outweighed by the state-related bugs your users don't have to endure.

It is quite easy to fully understand standard Binary Search Tree and its operations. Because of that understanding, I even don't need to remember the implementations of those insert, delete, search operations.

I am learning Red-Black Tree now and I understand its properties for keeping the tree balanced. However I feel very hard to understand its insert and delete procedures.

I understand that when inserting a new node, we mark the node as red (because red is the best we can do: it breaks fewer Red-Black tree laws). The new red node may still break the "no consecutive red nodes" law. Then we fix it via:

  1. check its uncle's colour, if red, then mark its parent and uncle as black, and go to grandparent.

  2. if it is right child, left rotate its parent

  3. mark its parent as black and its grandparent as red, then right rotate its grandparent.

done (basically like above).

Many places describe Red-Black tree insertion like the above. They just tell you how to do it. But why do those steps fix the tree? Why first left rotate, and then right rotate?

Can anyone explain the why to me more clearly, even more clearly than CLRS? What's the magic of rotation?

I really wish to understand this, so that after a year I can implement a Red-Black tree by myself without reviewing a book.

Thanks

ignore my (now deleted) comment - i think okasaki's code is going to help you. if you have the book ("purely functional data structures"), look at the text on page 26 and figure 3.5 (facing, p 27). it's hard to get clearer than that.

unfortunately the thesis available on-line doesn't have that part.

i'm not going to copy it out because the diagram is important, but it shows that all the different cases are basically the same thing, and it gives some very simple ML code that hammers that home.
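
for reference, the same balancing idea can be sketched in haskell (the book gives it in ml, next to the diagram); the point is that all four "red child of a red node" cases rebuild into the one shape on the right-hand side:

data Color = R | B
data Tree a = E | T Color (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x t = blacken (ins t)
  where
    ins E = T R E x E
    ins s@(T c l y r)
      | x < y     = balance c (ins l) y r
      | x > y     = balance c l y (ins r)
      | otherwise = s
    blacken (T _ l y r) = T B l y r
    blacken E           = E

-- every red-red violation becomes the same balanced shape
balance :: Color -> Tree a -> a -> Tree a -> Tree a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance c l x r                     = T c l x r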

[update] it looks like you may be able to see this on amazon. go to the book's page, mouse over the image and enter "red black" in the search box. that gives you results that include pages 25 and 26, but you need to be logged on to see them (apparently - i haven't tried logging in to check).

I've been thinking for a while about how to go about implementing a deque (that is, a double-ended queue) as an immutable data structure.

There seem to be different ways of doing this. AFAIK, immutable data structures are generally hierarchical, so that major parts of it can be reused after modifying operations such as the insertion or removal of an item.

Eric Lippert has two articles on his blog about this topic, along with sample implementations in C#.

Both of his implementations strike me as more elaborate than is actually necessary. Couldn't deques simply be implemented as binary trees, where elements can only be inserted or removed on the very "left" (the front) and on the very "right" (the back) of the tree?

                               o
                              / \
                             …   …
                            /     \
                           …       …
                          / \     / \
              front -->  L   …   …   R  <-- back

Additionally, the tree would be kept reasonably balanced with rotations:

  • right rotations upon insertion at the front or upon removal from the back, and
  • left rotations upon removal from the front or insertion at the back.

Eric Lippert is, in my opinion, a very smart person whom I deeply respect, yet he apparently didn't consider this approach. Thus I wonder, was it for a good reason? Is my suggested way of implementing deques naïve?

As Daniel noted, implementing immutable deques with well-known balanced search trees like AVL or red-black trees gives Θ(lg n) worst case complexity. Some of the implementations Lippert discusses may seem elaborate at first glance, but there are many immutable deques with o(lg n) worst or average or amortized complexity that are built from balanced trees along with two simple ideas:

  1. Reverse the Spines

    To perform deque operations on a traditional balanced search tree, we need access to the ends, but we only have access to the center. To get to the left end, we must navigate left child pointers until we finally reach a dead end. It would be preferable to have a pointer to the left and right ends without all that navigation effort. In fact, we really don't need access to the root node very often. Let's store a balanced search tree so that access to the ends is O(1).

    Here is an example in C of how you might normally store an AVL tree:

    struct AVLTree {
      const char * value;
      int height;
      struct AVLTree * leftChild;
      struct AVLTree * rightChild;
    };
    

    To set up the tree so that we can start at the edges and move towards the root, we change the tree and store all of the pointers along the paths from the root to the left and rightmost children in reverse. (These paths are called the left and right spine, respectively). Just like reversing a singly-linked list, the last element becomes the first, so the leftmost child is now easily accessible.

    This is a little tricky to understand. To help explain it, imagine that you only did this for the left spine:

    struct LeftSpine {
      const char * value;
      int height;
      struct AVLTree * rightChild;
      struct LeftSpine * parent;
    };
    

    In some sense, the leftmost child is now the "root" of the tree. If you drew the tree this way, it would look very strange, but if you simply take your normal drawing of a tree and reverse all of the arrows on the left spine, the meaning of the LeftSpine struct should become clearer. Access to the left side of the tree is now immediate. The same can be done for the right spine:

    struct RightSpine {
      double value;
      int height;
      struct AVLTree * leftChild;
      struct RightSpine * parent;
    };
    

    If you store both a left and a right spine as well as the center element, you have immediate access to both ends. Inserting and deleting may still be Ω(lg n), because rebalancing operations may require traversing the entire left or right spine, but simply viewing the leftmost and rightmost elements is now O(1).

    An example of this strategy is used to make purely functional treaps with implementations in SML and Java (more documentation). This is also a key idea in several other immutable deques with o(lg n) performance.

  2. Bound the Rebalancing Work

    As noted above, inserting at the left or rightmost end of an AVL tree can require Ω(lg n) time for traversing the spine. Here is an example of an AVL tree that demonstrates this:

    A full binary tree is defined by induction as:

    • A full binary tree of height 0 is an empty node.
    • A full binary tree of height n+1 has a root node with full binary trees of height n as children.

    Pushing an element onto the left of a full binary tree will necessarily increase the maximum height of the tree. Since the AVL trees above store that information in each node, and since every tree along the left spine of a full binary tree is also a full binary tree, pushing an element onto the left of an AVL deque that happens to be a full binary tree will require incrementing Ω(lg n) height values along the left spine.

    (Two notes on this: (a) You can store AVL trees without keeping the height in the node; instead you keep only balance information (left-taller, right-taller, or even). This doesn't change the performance of the above example. (b) In AVL trees, you might need to do not only Ω(lg n) balance or height information updates, but Ω(lg n) rebalancing operations. I don't recall the details of this, and it may be only on deletions, rather than insertions.)

    In order to achieve o(lg n) deque operations, we need to limit this work. Immutable deques represented by balanced trees usually use at least one of the following strategies:

    • Anticipate where rebalancing will be needed. If you are using a tree that requires o(lg n) rebalancing but you know where that rebalancing will be needed and you can get there quickly enough, you can perform your deque operations in o(lg n) time. Deques that use this as a strategy will store not just two pointers into the deque (the ends of the left and right spines, as discussed above), but some small number of jump pointers to places higher along the spines. Deque operations can then access the roots of the trees pointed to by the jump pointers in O(1) time. If o(lg n) jump pointers are maintained for all of the places where rebalancing (or changing node information) will be needed, deque operations can take o(lg n) time.

      (Of course, this makes the tree actually a dag, since the trees on the spines pointed to by jump pointers are also pointed to by their children on the spine. Immutable data structures don't usually get along with non-tree graphs, since replacing a node pointed to by more than one other node requires replacing all of the other nodes that point to it. I have seen this fixed by just eliminating the non-jump pointers, turning the dag back into a tree. One can then store a singly-linked list with jump pointers as a list of lists. Each subordinate list contains all of the nodes between the head of that list and its jump pointer. This requires some care to deal with partially overlapping jump pointers, and a full explanation is probably not appropriate for this aside.)

      This is one of the tricks used by Tsakalidis in his paper "AVL Trees for localized search" to allow O(1) deque operations on AVL trees with a relaxed balance condition. It is also the main idea used by Kaplan and Tarjan in their paper "Purely functional, real-time deques with catenation" and a later refinement of that by Mihaesau and Tarjan. Munro et al.'s "Deterministic Skip Lists" also deserves a mention here, though translating skip lists to an immutable setting by using trees sometimes changes the properties that allow such efficient modification near the ends. For examples of the translation, see Messeguer's "Skip trees, an alternative data structure to Skip lists in a concurrent approach", Dean and Jones's "Exploring the duality between skip lists and binary search trees", and Lamoureux and Nickerson's "On the Equivalence of B-trees and deterministic skip lists".

    • Do the work in bulk. In the full binary tree example above, no rebalancing is needed on a push, but Ω(lg n) nodes need to have their height or balance information updated. Instead of actually doing the incrementation, you could simply mark the spine at the ends as needing incrementation.

      One way to understand this process is by analogy to binary numbers. (2^n)-1 is represented in binary by a string of n 1's. When adding 1 to this number, you need to change all of the 1's to 0's and then add a 1 at the end. The following Haskell encodes binary numbers as non-empty strings of bits, least significant first.

      data Bit = Zero | One
      
      type Binary = (Bit,[Bit])
      
      incr :: Binary -> Binary
      incr (Zero,x) = (One,x)
      incr (One,[]) = (Zero,[One])
      incr (One,(x:xs)) = 
          let (y,ys) = incr (x,xs)
          in (Zero,y:ys)
      

      incr is a recursive function, and for numbers of the form (One,replicate k One), incr calls itself Ω(k) times.

      Instead, we might represent groups of equal bits by only the number of bits in the group. Neighboring bits or groups of bits are combined into one group if they are equal (in value, not in number). We can increment in O(1) time:

      data Bits = Zeros Int | Ones Int
      
      type SegmentedBinary = (Bits,[Bits])
      
      segIncr :: SegmentedBinary -> SegmentedBinary
      segIncr (Zeros 1,[]) = (Ones 1,[])
      segIncr (Zeros 1,(Ones n:rest)) = (Ones (n+1),rest)
      segIncr (Zeros n,rest) = (Ones 1,Zeros (n-1):rest)
      segIncr (Ones n,[]) = (Zeros n,[Ones 1])
      segIncr (Ones n,(Zeros 1:Ones m:rest)) = (Zeros n,Ones (m+1):rest)
      segIncr (Ones n,(Zeros p:rest)) = (Zeros n,Ones 1:Zeros (p-1):rest)
      

      Since segIncr is not recursive and doesn't call any functions other than plus and minus on Ints, you can see it takes O(1) time.

      Some of the deques mentioned in the section above entitled "Anticipate where rebalancing will be needed" actually use a different numerically-inspired technique called "redundant number systems" to limit the rebalancing work to O(1) and locate it quickly. Redundant numerical representations are fascinating, but possibly too far afield for this discussion. Elmasry et al.'s "Strictly-regular number system and data structures" is not a bad place to start reading about that topic. Hinze's "Bootstrapping one-sided flexible arrays" may also be useful.

      In "Making data structures persistent", Driscoll et al. describe lazy recoloring, which they attribute to Tsakalidis. They apply it to red-black trees, which can be rebalanced after insertion or deletion with O(1) rotations (but Ω(lg n) recolorings) (see Tarjan's "Updataing a balanced tree in O(1) rotations"). The core of the idea is to mark a large path of nodes that need to be recolored but not rotated. A similar idea is used on AVL trees in the older versions of Brown & Tarjan's "A fast merging algorithm". (Newer versions of the same work use 2-3 trees; I have not read the newer ones and I do not know if they use any techniques like lazy recoloring.)

    • Randomize. Treaps, mentioned above, can be implemented in a functional setting so that they perform deque operations in O(1) time on average. Since deques do not need to inspect their elements, this average is not susceptible to malicious input degrading performance, unlike simple (no rebalancing) binary search trees, which are fast on average input. Treaps use an independent source of random bits instead of relying on randomness from the data.

      In a persistent setting, treaps may be susceptible to degraded performance from malicious input with an adversary who can both (a) use old versions of a data structure and (b) measure the performance of operations. Because they do not have any worst-case balance guarantees, treaps can become quite unbalanced, though this should happen rarely. If an adversary waits for a deque operation that takes a long time, she can initiate that same operation repeatedly in order to measure and take advantage of a possibly unbalanced tree.

      If this is not a concern, treaps are an attractively simple data structure. They are very close to the AVL spine tree described above.

      Skip lists, mentioned above, might also be amenable to functional implementations with O(1) average-time deque operations.

      The first two techniques for bounding the rebalancing work require complex modifications to data structures while usually affording a simple analysis of the complexity of deque operations. Randomization, along with the next technique, gives simpler data structures but a more complex analysis. The original analysis by Seidel and Aragon is not trivial, and there is some complex analysis of exact probabilities using more advanced mathematics than is present in the papers cited above -- see Flajolet et al.'s "Patterns in random binary search trees".

    • Amortize. There are several balanced trees that, when viewed from the roots up (as explained in "Reverse the Spines", above), offer O(1) amortized insertion and deletion time. Individual operations can take Ω(lg n) time, but they put the tree in such a nice state that a large number of operations following the expensive operation will be cheap.

      Unfortunately, this kind of analysis does not work when old versions of the tree are still around. A user can perform operations on the old, nearly-out-of-balance tree many times without any intervening cheap operations.

      One way to get amortized bounds in a persistent setting was invented by Chris Okasaki. It is not simple to explain how the amortization survives the ability to use arbitrary old versions of a data structure, but if I remember correctly, Okasaki's first (as far as I know) paper on the subject has a pretty clear explanation. For more comprehensive explanations, see his thesis or his book.

      As I understand it, there are two essential ingredients. First, instead of just guaranteeing that a certain number of cheap operations occur before each expensive operation (the usual approach to amortization), you actually designate and set up that specific expensive operation before performing the cheap operations that will pay for it. In some cases, the operation is scheduled to be started (and finished) only after many intervening cheap steps. In other cases, the operation is actually scheduled only O(1) steps in the future, but cheap operations may do part of the expensive operation and then reschedule more of it for later. An adversary looking to repeat an expensive operation over and over again is thus actually reusing the same scheduled operation each time. This sharing is where the second ingredient comes in.

      The computation is set up using laziness. A lazy value is not computed immediately, but, once evaluated, its result is saved. The first time a client needs to inspect a lazy value, its value is computed. Later clients can use that cached value directly, without having to recompute it.

      #include <stdlib.h>

      struct lazy {
        int (*oper)(const char *);
        char * arg;
        int* ans;        /* 0 until the result has been computed */
      };

      typedef struct lazy * lazyop;

      lazyop suspend(int (*oper)(const char *), char * arg) {
        lazyop ans = (lazyop)malloc(sizeof(struct lazy));
        ans->oper = oper;
        ans->arg = arg;
        ans->ans = 0;    /* not yet evaluated */
        return ans;
      }

      void force(lazyop susp) {
        if (0 == susp) return;
        if (0 != susp->ans) return;   /* already evaluated; reuse the cached result */
        susp->ans = (int*)malloc(sizeof(int));
        *susp->ans = susp->oper(susp->arg);
      }

      int get(lazyop susp) {
        force(susp);
        return *susp->ans;
      }
      

      Laziness constructs are included in some MLs, and Haskell is lazy by default. Under the hood, laziness is a mutation, which leads some authors to call it a "side effect". That might be considered bad if that kind of side effect doesn't play well with whatever the reasons were for selecting an immutable data structure in the first place, but, on the other hand, thinking of laziness as a side effect allows the application of traditional amortized analysis techniques to persistent data structures, as mentioned in a paper by Kaplan, Okasaki, and Tarjan entitled "Simple Confluently Persistent Catenable Lists".

      Consider again the adversary from above who is attempting to repeatedly force the computation of an expensive operation. After the first force of the lazy value, every remaining force is cheap.

      In his book, Okasaki explains how to build deques with O(1) amortized time required for each operation. It is essentially a B+-tree, which is a tree where all of the elements are stored at the leaves, nodes may vary in how many children they have, and every leaf is at the same depth. Okasaki uses the spine-reversal method discussed above, and he suspends (that is, stores as a lazy value) the spines above the leaf elements.

      A structure by Hinze and Paterson called "Finger trees: a simple general-purpose data structure" is halfway between the deques designed by Okasaki and the "Purely functional representations of catenable sorted lists" of Kaplan and Tarjan. Hinze and Paterson's structure has become very popular.

      As evidence of how tricky the amortized analysis is to understand, Hinze and Paterson's finger trees are frequently implemented without laziness, making the time bounds not O(1) but still O(lg n). One implementation that seems to use laziness is the one in functional-dotnet. That project also includes an implementation of lazy values in C#, which might help explain them if my explanation above is lacking.

Could deques be implemented as binary trees? Yes, and their worst-case complexity when used persistently would be no worse than those presented by Eric Lippert. However, Eric's trees are actually not complicated enough to get O(1) deque operations in a persistent setting, though only by a small complexity margin (making the center lazy) if you are willing to accept amortized performance. A different but also simple view of treaps can get O(1) expected performance in a functional setting, assuming an adversary who is not too tricky. Getting O(1) worst-case deque operations with a tree-like structure in a functional setting requires a good bit more complexity than Eric's implementations.


Two final notes (though this is a very interesting topic and I reserve the right to add more later) :-)

  1. Nearly all of the deques mentioned above are finger search trees as well. In a functional setting this means they can be split at the ith element in O(lg(min(i,n-i))) time and two trees of size n and m can be concatenated in O(lg(min(n,m))) time. A small usage sketch follows after this list.

  2. I know of two ways of implementing deques that don't use trees. Okasaki presents one in his book and thesis and the paper I linked to above. The other uses a technique called "global rebuilding" and is presented in Chuang and Goldberg's "Real-time deques, multihead Turing machines, and purely functional programming".
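
As a concrete illustration of note 1: GHC's Data.Sequence, an implementation of Hinze and Paterson's finger trees, exposes split and concatenation with those bounds. A minimal usage sketch:

import qualified Data.Sequence as Seq
import Data.Sequence ((><))

example :: (Seq.Seq Int, Seq.Seq Int, Seq.Seq Int)
example =
  let s             = Seq.fromList [1 .. 10]
      (left, right) = Seq.splitAt 3 s   -- split at the ith element
      joined        = left >< right     -- concatenate two sequences
  in (left, right, joined)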

In his seminal thesis, Chris Okasaki described the technique of data-structural bootstrapping. What work, if any, has been done to use this technique to improve locality in data structures?

For example, balanced binary trees are commonly used to create purely functional sets and dictionaries, but a hash trie of small arrays is often significantly faster due to improved locality.

You could try references to his book by Haskell or Clojure folk rather than just the CMU PDF, e.g.:

http://www.amazon.com/Purely-Functional-Structures-Chris-Okasaki/dp/0521663504

There was a question here on SO at :

What is the benefit of purely functional data structure?

In the Clojure area there is also this:

https://github.com/viksit/clojure-datastructures

And there was this on SE :

http://cstheory.stackexchange.com/questions/1539/whats-new-in-purely-functional-data-structures-since-okasaki

Hope something there provides a basis for a search that bears results :-)

You may have to use an academic or business reference search engine, and you may want to look at poster sessions at a conference, because searching is not obvious here; e.g., Mercury can generate Erlang code ... so try searching for caching and locality with respect to performance in functional programming, in some hardware area dealing with latency.

Canada's National Research Council (NRC) had some work going on ... you could try a search of their pubs/notices/reports.

But note: a search with

bigdata latency locality NRC 2012

gives rather different result from

bigdata functional latency locality NSF 2012

( and I would next drop the 2012 and try using the google search tool date range option for recent results)

I'm working on a small concept project in Haskell which requires a circular buffer. I've managed to create a buffer using arrays which has O(1) rotation, but of course requires O(N) for insertion/deletion. I've found an implementation using lists which appears to take O(1) for insertion and deletion, but since it maintains a left and right list, crossing a certain border when rotating will take O(N) time. In an imperative language, I could implement a doubly linked circular buffer with O(1) insertion, deletion, and rotation. I'm thinking this isn't possible in a purely functional language like Haskell, but I'd love to know if I'm wrong.

If you can deal with amortized O(1) operations, you could probably use either Data.Sequence from the containers package, or Data.Dequeue from the dequeue package. The former uses finger trees, while the latter uses the "Banker's Dequeue" from Okasaki's Purely Functional Data Structures (a prior version online here).
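
For instance, a minimal sketch of treating Data.Sequence as a rotating buffer (rotateL is a name introduced here, not part of the library):

import Data.Sequence (Seq, ViewL(..), (|>), viewl)

-- Rotate the buffer left by one position: the head moves to the back.
-- This, like insertion and removal at either end, is amortized O(1).
rotateL :: Seq a -> Seq a
rotateL buf = case viewl buf of
  EmptyL  -> buf
  x :< xs -> xs |> x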

We are developing a small image processing library for Scala (student project). The library is completely functional (i.e. no mutability). The raster of the image is stored as a Stream[Stream[Int]] to exploit the benefits of lazy evaluation with the least effort. However, upon performing a few operations on an image, the heap gets full and an OutOfMemoryError is thrown (for example, up to 4 operations can be performed on a 500 x 400, 35 kB JPEG image before the JVM heap runs out of space).

The approaches we have thought of are:

  • Twiddling with JVM options to increase the heap size. (We don't know how to do this under IDEA, the IDE we are working with.)
  • Choosing a different data structure than Stream[Stream[Int]], one which is better suited to the task of image processing. (Again, we do not know much about functional data structures beyond the simple List and Stream.)

The last option we have is giving up on immutability and making it a mutable library (like the popular image processing libraries), which we don't really want to do. Please suggest some way to keep this library functional and still functioning, if you know what I mean.

Thank you,
Siddharth Raina.

ADDENDUM:
For an image sized 1024 x 768, the JVM runs out of heap space even for a single mapping operation. Some example code from our test:

val image = Image from "E:/metallica.jpg"
val redded = image.map(_ & 0xff0000)
redded.display(title = "Redded")

And the output:

"C:\Program Files (x86)\Java\jdk1.6.0_02\bin\java" -Didea.launcher.port=7533 "-Didea.launcher.bin.path=C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 10.0.2\bin" -Dfile.encoding=windows-1252 -classpath "C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\charsets.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\deploy.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\javaws.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\jce.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\jsse.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\management-agent.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\plugin.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\resources.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\rt.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\dnsns.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\localedata.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\sunjce_provider.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\sunmscapi.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\sunpkcs11.jar;C:\new Ph\Phoebe\out\production\Phoebe;E:\Inventory\Marvin.jar;C:\scala-2.8.1.final\lib\scala-library.jar;C:\scala-2.8.1.final\lib\scala-swing.jar;C:\scala-2.8.1.final\lib\scala-dbc.jar;C:\new Ph;C:\scala-2.8.1.final\lib\scala-compiler.jar;E:\Inventory\commons-math-2.2.jar;E:\Inventory\commons-math-2.2-sources.jar;E:\Inventory\commons-math-2.2-javadoc.jar;E:\Inventory\jmathplot.jar;E:\Inventory\jmathio.jar;E:\Inventory\jmatharray.jar;E:\Inventory\Javax Media.zip;E:\Inventory\jai-core-1.1.3-alpha.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 10.0.2\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain phoebe.test.ImageTest
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at scala.collection.Iterator$class.toStream(Iterator.scala:1011)
    at scala.collection.IndexedSeqLike$Elements.toStream(IndexedSeqLike.scala:52)
    at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1011)
    at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1011)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:565)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:557)
    at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:168)
    at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:168)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:565)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:557)
    at scala.collection.immutable.Stream$$anonfun$flatten1$1$1.apply(Stream.scala:453)
    at scala.collection.immutable.Stream$$anonfun$flatten1$1$1.apply(Stream.scala:453)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:565)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:557)
    at scala.collection.immutable.Stream.length(Stream.scala:113)
    at scala.collection.SeqLike$class.size(SeqLike.scala:221)
    at scala.collection.immutable.Stream.size(Stream.scala:48)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:388)
    at scala.collection.immutable.Stream.toArray(Stream.scala:48)
    at phoebe.picasso.Image.force(Image.scala:85)
    at phoebe.picasso.SimpleImageViewer.<init>(SimpleImageViewer.scala:10)
    at phoebe.picasso.Image.display(Image.scala:91)
    at phoebe.test.ImageTest$.main(ImageTest.scala:14)
    at phoebe.test.ImageTest.main(ImageTest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)

Process finished with exit code 1

I strongly recommend Okasaki's Purely Functional Data Structures if you don't have any experience with functional data structures (as you seem to indicate).

In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program.

This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array.

So my questions are:

  • Are there really significantly different usage patterns for data structures between functional and imperative programming?
  • If so, is this a problem?
  • What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?

The book Purely Functional Data Structures covers your questions in depth, and includes a great mix of theory and implementations, primarily in ML - the appendix also contains Haskell implementations, so you should be able to follow along with a bit of extra page turning. It is a pretty good (though difficult in parts) read if you are really interested in a thorough answer to your questions. Having said that, I think ephemient gave a superb short answer.

edit: Steven Huwig provided a link to the thesis that the book started as. While I haven't read through it, the only big thing missing (judging from the table of contents) is the Haskell implementations.

From what I understand, the list type in Haskell is implemented internally using a linked list. However, the user of the language does not get to see the details of the implementation, nor does he have the ability to modify the "links" that make up the linked list to allow it to point to a different memory address. This, I suppose, is done internally.

How, then, should the list type in Haskell be qualified? Is it a "data type" or an "abstract data type"? And what of the linked list type used in the implementation?

Additionally, since the list type provided by the Prelude is not a linked list type, how can the basic linked list functions be implemented?

Take, for example, this piece of code designed to add an element a at index n of a list:

add [] acc _ _ = reverse acc
add (x:xs) acc 0 a = add xs (x:a:acc) (-1) a 
add (x:xs) acc n a = add xs (x:acc) (n-1) a

Using a "real" linked list, adding an element would just consist of modifying a pointer to a memory address. This is not possible in Haskell (or is it ?), thus the question : is my implementation of adding an element to a list the best possible one, or am I missing something (the use of the reverse function is, I think, particularly ugly, but is it possible to do without ?)

Please, do not hesitate to correct me if anything I have said is wrong, and thank you for your time.

Re: adding an element at a given index of a list, I'd suggest using the (++) operator and the splitAt function:

add xs a n = beg ++ (a : end)
  where
    (beg, end) = splitAt n xs

The List is a linked list, but it's read-only. You can't modify a List in place - instead you create a new List structure which has the elements you want. I haven't read it, but this book probably gets at your underlying question.

HTH

A is an array of the integers from 1 to n in random order.

I need random access to the ith largest element of the first j elements in logarithmic time or better.

What I've come up with so far is an n x n matrix M, where the element in the (i, j) position is the ith largest of the first j. This gives me constant-time random access, but requires n^2 storage.

By construction, M is sorted by row and column. Further, each column differs from its neighbors by a single value.

Can anyone suggest a way to compress M down to n log(n) space or better, with log(n) or better random access time?

I believe you can perform the access in O(log(N)) time, given O(N log(N)) preprocessing time and O(N log(N)) extra space. Here's how.

You can augment a red-black tree to support a select(i) operation which retrieves the element at rank i in O(log(N)) time. For example, see this PDF or the appropriate chapter of Introduction to Algorithms.

You can implement a red-black tree (even one augmented to support select(i)) in a functional manner, such that the insert operation returns a new tree which shares all but O(log(N)) nodes with the old tree. See for example Purely Functional Data Structures by Chris Okasaki.

We will build an array T of purely functional augmented red-black trees, such that the tree T[j] stores the indexes 0 ... j-1 of the first j elements of A sorted largest to smallest.

Base case: At T[0] create an augmented red-black tree with just one node, whose data is the number 0, which is the index of the 0th largest element in the first 1 elements of your array A.

Inductive step: For each j from 1 to N-1, at T[j] create an augmented red-black tree by purely functionally inserting a new node with index j into the tree T[j-1]. This creates at most O(log(j)) new nodes; the remaining nodes are shared with T[j-1]. This takes O(log(j)) time.

The total time to construct the array T is O(N log(N)) and the total space used is also O(N log(N)).

Once T[j-1] is created, you can access the ith largest element of the first j elements of A by performing T[j-1].select(i). This takes O(log(j)) time. Note that you can create T[j-1] lazily the first time it is needed. If A is very large and j is always relatively small, this will save a lot of time and space.
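
To make the select(i) mechanics concrete, here is a small sketch of my own (not part of the original answer): a size-augmented search tree in Haskell with a persistent insert and a rank query. The red-black balancing that guarantees the O(log N) bounds is omitted for brevity, and this tree is ordered smallest-to-largest rather than largest-to-smallest as described above.

-- A sketch only: plain size-augmented BST, no red-black balancing.
data RankTree a = RLeaf | RNode Int (RankTree a) a (RankTree a)   -- Int = subtree size

rsize :: RankTree a -> Int
rsize RLeaf           = 0
rsize (RNode n _ _ _) = n

rnode :: RankTree a -> a -> RankTree a -> RankTree a
rnode l x r = RNode (rsize l + rsize r + 1) l x r

-- Persistent insert: rebuilds only the search path, shares every other subtree.
rinsert :: Ord a => a -> RankTree a -> RankTree a
rinsert x RLeaf = rnode RLeaf x RLeaf
rinsert x t@(RNode _ l y r)
  | x < y     = rnode (rinsert x l) y r
  | x > y     = rnode l y (rinsert x r)
  | otherwise = t

-- select i returns the element of rank i (0-based, smallest first).
rselect :: Int -> RankTree a -> a
rselect _ RLeaf = error "select: rank out of range"
rselect i (RNode _ l x r)
  | i <  rsize l = rselect i l
  | i == rsize l = x
  | otherwise    = rselect (i - rsize l - 1) r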

I am starting to doubt if my plan of getting into Haskell and functional programming by using Haskell for my next course on algorithms is a good one.

To get some Haskell lines under my belt I started trying to implement some simple algorithms. First: Gale-Shapley for the Stable Marriage Problem. Having not yet gotten into monads, all that mutable state looks daunting, so instead I used the characterization of stable matchings as fixed points of a mapping on the lattice of semi-matchings. It was fun, but it's no longer Gale-Shapley and the complexity isn't nice (those chains in the lattice can get pretty long, apparently :)

Next up I have the algorithm for Closest Pair of points in the plane, but am stuck on getting the usual O(n*log n) complexity because I can't work out how to get a set-like data structure with O(1) checking for membership.

So my question is: can one in general implement most algorithms, e.g. Dijkstra, Ford-Fulkerson (Gale-Shapley!?), matching the complexities of the procedural implementations, if one gets a better command of Haskell and functional programming in general?

This probably can't be answered in general. A lot of standard algorithms are designed around mutability, and translations exist in some cases, not in others. Sometimes alternate algorithms exist that give equivalent performance characteristics, sometimes you really do need mutability.

A good place to start, if you want understanding of how to approach algorithms in this setting, is Chris Okasaki's book Purely Functional Data Structures. The book is an expanded version of his thesis, which is available online in PDF format.

If you want help with specific algorithms, such as the O(1) membership checking (which is actually misleading -- there's no such thing; such data structures usually have something like O(k), where k is the size of the elements being stored), you'd be better off asking that as a specific, single question instead of a very general question like this.

I'm building an entire application out of immutable objects so that multi-threading and undo become easier to implement. I'm using the Google Collections Library which provides immutable versions of Map, List, and Set.

My application model looks like a tree:

  • Scene is a top-level object that contains a reference to a root Node.
  • Each Node can contain child Nodes and Ports.

An object graph might look like this:

Scene
 |
 +-- Node
      |
      +-- Node 
           |
           +- Port
      +-- Node
           |
           +- Port
           +- Port

If all of these objects are immutable, controlled by a top-level SceneController object:

  • What is the best way to construct this hierarchy?
  • How would I replace an object that is arbitrarily deep in the object tree?
  • Is there a way to support back-links, e.g. a Node having a "parent" attribute?

And more generally:

  • Have any patterns emerged for dealing with this type of data?
  • Is there (academic) literature available on the subject?
  • Is this a good idea?

There are two concepts of interest here. First, persistent data structures. If all elements of the tree are immutable, then one can derive a new tree from the original tree by replacing some parts, but referring to the older parts, thus saving time and memory.

For example, if you were to add a third Port to the Node that already has two Ports, you'd have to create a new Scene, a new root Node, and a new copy of the Node you are changing. The other Node and all of the existing Ports do not need to be created anew -- you just refer to them from the new Scene/Nodes.
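
As an illustration (my own sketch, written in Haskell for brevity rather than Java; the type and function names are hypothetical), here is what that path copying looks like: only the ancestors of the changed Node are rebuilt, and every other subtree is shared by reference.

data Port  = Port String                  deriving Show
data Node  = Node String [Node] [Port]    deriving Show
data Scene = Scene Node                   deriving Show

-- Add a Port to the Node reached by following the given child names from the root.
addPortAt :: [String] -> Port -> Scene -> Scene
addPortAt path p (Scene root) = Scene (go path root)
  where
    go []     (Node name kids ports) = Node name kids (ports ++ [p])
    go (n:ns) (Node name kids ports) =
      -- Only the child on the path is rebuilt; its siblings are reused as-is.
      Node name [ if childName c == n then go ns c else c | c <- kids ] ports
    childName (Node name _ _) = name

In the three-level example above, adding a Port this way allocates one new Scene and two new Nodes; every other Node and Port is the old object, shared.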

The other concept is that of a Zipper. A zipper is a way to "navigate" through a persistent data structure to optimize local changes. For instance, if you added four new Ports instead of just one, but you added each Port one at a time, you'd have to create four new Scenes, and eight new Nodes. With a zipper, you defer such creations until you are done, saving up on those intermediary objects.

The best explanation I ever read about zipper is here.

Now, using a zipper to navigate a data structure removes the need for back-links. You can have back-links in an immutable structure by clever use of recursive constructors. However, such a data structure would not be persistent. Non-persistent immutable data structures have lousy modification performance, because you need to copy the whole data structure each time.

As for academic literature, I recommend Purely Functional Data Structures, by Okasaki (dissertation PDF, fully fledged book).

As I am in my first year of a career in software development (C++ & C#), I now see my flaws and what I am missing in this field. Because of that I came to some conclusions and made myself a plan to fill those gaps and increase my knowledge of software development. But the question I stumbled upon after listing the tasks I need to do has no obvious answer for me: what is the priority of those tasks? Here are the tasks, with my priorities as numbered:

Learning:

  1. Functional programming (Scala)
  2. Data structures & Algorithms (Cormen book to the rescue + TopCoder/ProjectEuler/etc)
  3. Design patterns (GOF or Head First)

Do you agree with these tasks and priorities? Or am I missing something here? Any suggestions are welcome!

I think you have it backwards. Start with design patterns, which will help you reduce the amount of messy code you produce and better understand code written by other people (particularly libraries written with design patterns in mind).

In addition to the Gang of Four book, there are many other design pattern books -- Patterns of Enterprise Application Architecture, for example. It might be worth looking at them after you get a good grounding. But I also highly recommend Domain-Driven Design, which I think gives you a way of thinking about how to structure your program, instead of just identifying pieces here and there.

Next you can go with algorithms. I prefer Skiena's The Algorithm Design Manual, whose emphasis is more on getting people to know how to select and use algorithms, and to build them from well-known "parts", than on teaching them to write proofs about algorithms. It is also available for Kindle, which was useful to me.

Also, get a good data structures book -- people often neglect that. I like the Handbook of Data Structures and Applications, though I'm also looking into Advanced Data Structures.

However, I cannot recommend either TopCoder or Project Euler for this task. TopCoder is, imho, mostly about writing code fast. Nothing wrong with that, but it's hardly likely to make a difference in day-to-day work. If you like it, by all means do it. Also, it's excellent preparation for job interviews with the more technically minded companies.

Project Euler, on the other hand, is much more targeted at scientific computing, computer science and functional programming. It will be an excellent training ground when learning functional programming.

There's something that has a bit of design patterns, algorithms and functional programming, which is Elements of Programming. It uses C++ for its examples, which is a plus for you.

As for functional programming, I think it is less urgent than the other two. However, I indicate either Clojure or Haskell instead of Scala.

Learning functional programming in Scala is like learning Spanish in a latino neighborhood, while learning functional programming in Clojure is like learning Spanish in Madrid, and learning functional programming in Haskell is like learning Spanish in an isolated monastery in Spain. :-)

Mind you, I prefer Scala as a programming language, but I already knew FP when I came to it.

When you do get to functional programming, get Chris Okasaki's Purely Functional Data Structures, for a good grounding on algorithms and data structures for functional programming.

Beyond that, try to learn a new language every year. Even if not for the language itself, you are more likely to keep up to date with what people are doing nowadays.

As per the title.

I have the following code which creates a binary search tree, but if I want it created and changed dynamically based on user input, how would I do that if I can't change the value of a variable in Haskell?

find :: (Ord a) => Node a -> a -> Bool
find (Node val left right) s
    | s == val      = True
    | s < val       = find left s
    | s > val       = find right s

find Empty s = False

data Node a = Node a (Node a) (Node a)
              | Empty

myTree = Node "m"   (Node "a" Empty Empty)
                    (Node "z" Empty Empty)

Thanks in advance!

Dario gave a good direct answer. If you want more in-depth information, there's Purely Functional Data Structures by Chris Okasaki, an entire book on the subject. I bought it myself, but sadly, I don't have the time to experiment with the ideas.

I'm wondering if there is an implementation of a map which is:

  • Immutable, so that I can use it in functional programming, and effortlessly ensure transactions and concurrency.
  • Fast. I've checked out Binary Search Trees (RB, AVL) and Tries, but none of them seemed to be as fast as Hash Tables. Is there a map implementation that supports constant time for updates and retrievals? (or at least very fast logarithmic time)

In short, is there a functional data structure that can compare with Hash Maps in performance?

I haven't read it, but I think some people consider Purely Functional Data Structures as the bible for this kind of thing.

I've become somewhat addicted to using immutable collections (mainly in Clojure, which calls them "persistent data structures"), and would love to be able program this way in some contexts on iOS and OS X.

A key example of where this would be useful is to be able to "change" a dictionary by creating a modified copy, and have change listeners be able to query the difference between the old and new values, rather than try to codify the change as a property change event. Immutable data structures are also a game-changer for concurrent programming: no need for locks.

Yes, you can do this now using the immutable NSArray and NSDictionary instances, but it becomes increasingly inefficient to copy them to make "changed" versions as you have larger and larger collections and/or make changes frequently: a small change to a large data structure then involves a disproportionate amount of work.

I'm looking for a way to enable immutable data programming in Objective-C. To clarify what this might look like, and for some more of the advantages it offers, the research by Phil Bagwell referenced in this SO question is highly relevant.

I don't think there's a shortcut here.

Just as you imply, Clojure's persistent data structures are quite a different thing from the immutable collections classes that Cocoa provides.

If you want to use Clojure's persistent data structures from Obj-C, the only way to do so is to re-implement them in Objective-C. My understanding is that many of these are described in Okasaki's book, Purely Functional Data Structures, and in the papers of Phil Bagwell.

This other answer has some links: What is the data structure behind Clojure's sets?.

The following two Haskell programs for computing the n'th term of the Fibonacci sequence have greatly different performance characteristics:

fib1 n =
  case n of
    0 -> 1
    1 -> 1
    x -> (fib1 (x-1)) + (fib1 (x-2))

fib2 n = fibArr !! n where
  fibArr = 1:1:[a + b | (a, b) <- zip fibArr (tail fibArr)]

They are very close to mathematically identical, but fib2 uses the list notation to memoize its intermediate results, while fib1 has explicit recursion. Despite the potential for the intermediate results to be cached in fib1, the execution time gets to be a problem even for fib1 25, suggesting that the recursive steps are always evaluated. Does referential transparency contribute anything to Haskell's performance? How can I know ahead of time if it will or won't?

This is just an example of the sort of thing I'm worried about. I'd like to hear any thoughts about overcoming the difficulty inherent in reasoning about the performance of a lazily-executed, functional programming language.


Summary: I'm accepting 3lectrologos's answer, because the point that you don't reason so much about the language's performance as about your compiler's optimization seems to be extremely important in Haskell - more so than in any other language I'm familiar with. I'm inclined to say that the importance of the compiler is the factor that differentiates reasoning about performance in lazy, functional languages from reasoning about the performance of any other type.


Addendum: Anyone happening on this question may want to look at the slides from Johan Tibell's talk about high performance Haskell.

Reasoning about performance is generally hard in Haskell and lazy languages in general, although not impossible. Some techniques are covered in Chris Okasaki's Purely Functional Data Structures (also available online in a previous version).

Another way to ensure performance is to fix the evaluation order, either using annotations or continuation passing style. That way you get to control when things are evaluated.

In your example you might calculate the numbers "bottom up" and pass the previous two numbers along to each iteration:

fib n = fib_iter(1,1,n)
    where
      fib_iter(a,b,0) = a
      fib_iter(a,b,1) = a
      fib_iter(a,b,n) = fib_iter(a+b,a,n-1)

This results in a linear time algorithm.

Whenever you have a dynamic programming algorithm where each result relies on the N previous results, you can use this technique. Otherwise you might have to use an array or something completely different.

I've just started working my way through Okasaki's Purely Functional Data Structures, but have been doing things in Haskell rather than Standard ML. However, I've come across an early exercise (2.5) that's left me a bit stumped on how to do things in Haskell:

Inserting an existing element into a binary search tree copies the entire search path even though the copied nodes are indistinguishable from the originals. Rewrite insert using exceptions to avoid this copying. Establish only one handler per insertion rather than one handler per iteration.

Now, my understanding is that ML, being an impure language, gets by with a conventional approach to exception handling not so different to, say, Java's, so you can accomplish it something like this:

datatype Tree = E | T of Tree * int * Tree

exception ElementPresent

fun insert (x, t) =
  let fun go E = T (E, x, E)
        | go (T (l, y, r)) =
            if      x < y then T (go l, y, r)
            else if y < x then T (l, y, go r)
            else raise ElementPresent
  in go t
  end
  handle ElementPresent => t

I don't have an ML implementation, so this may not be quite right in terms of the syntax.

My issue is that I have no idea how this can be done in Haskell, outside of doing everything in the IO monad, which seems like cheating and even if it's not cheating, would seriously limit the usefulness of a function which really doesn't do any mutation. I could use the Maybe monad:

data Tree a = Empty | Fork (Tree a) a (Tree a)
        deriving (Show)

insert :: (Ord a) => a -> Tree a -> Tree a
insert x t = maybe t id (go t)
  where
    go Empty = return (Fork Empty x Empty)
    go (Fork l y r)
      | x < y     = do l' <- go l; return (Fork l' y r)
      | x > y     = do r' <- go r; return (Fork l y r')
      | otherwise = Nothing

This means everything winds up wrapped in Just on the way back up when the element isn't found, which requires more heap allocation, and sort of defeats the purpose. Is this allocation just the price of purity?

EDIT to add: A lot of why I'm wondering about the suitability of the Maybe solution is that the optimization described only seems to save you all the constructor calls you would need in the case where the element already exists, which means heap allocations proportional to the length of the search path. The Maybe also avoids those constructor calls when the element already exists, but then you get a number of Just constructor calls equal to the length of the search path. I understand that a sufficiently smart compiler could elide all the Just allocations, but I don't know if, say, the current version of GHC is really that smart.

In terms of cost, the ML version is actually very similar to your Haskell version.

Every recursive call in the ML version results in a stack frame. The same is true in the Haskell version. This is going to be proportional in size to the path that you traverse in the tree. Also, both versions will of course allocate new nodes for the entire path if an insertion is actually performed.

In your Haskell version, every recursive call might also eventually result in the allocation of a Just node. This will go on the minor heap, which is just a block of memory with a bump pointer. For all practical purposes, GHC's minor heap is roughly equivalent in cost to the stack. Since these are short-lived allocations, they won't normally end up being moved to the major heap at all.

I want to maintain an immutable bounded FIFO queue from which I can remove the oldest values after a certain time. In Scala, the immutable.Queue works well for size-bounded queues (.size seems to be O(N) since it's internally based on List, but I can maintain the size separately), but there seems to be no cheap way to access the head element to test the age of the oldest value with anything cheaper than O(N), so I cannot test the expiration state of the oldest entry. Any pointers to a purely functional (immutable) implementation?

This article, Haskell: Queues without pointers, describes a purely functional queue with O(1) amortized cost (edit: for adding and removing elements). I think the data structure comes from Chris Okasaki and more details are in his book.

The basic idea is to decompose the queue into two lists, one for the front and one for the back. New elements are added to "front". "Back" is stored in reverse order, to facilitate popping elements. When all elements of "back" are gone, "front" is reversed and re-identified as "back". This data structure has O(1) amortized cost for these operations, but apparently with some work it can be reduced to O(1) worst case.

Edit: Okasaki's paper describes an elegant, purely functional implementation of queues and double-ended queues (deques). Deques allow adding or removing elements from either end. All such operations are O(1), worst case.
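
A minimal Haskell sketch of the two-list queue just described (my code, not the linked article's; the names are mine): "front" is where new elements are consed on, and "back" is kept oldest-first so the next element to pop is at its head.

data Fifo a = Fifo [a] [a]                    -- Fifo front back

emptyFifo :: Fifo a
emptyFifo = Fifo [] []

push :: a -> Fifo a -> Fifo a
push x (Fifo f b) = Fifo (x : f) b            -- O(1)

pop :: Fifo a -> Maybe (a, Fifo a)
pop (Fifo [] [])     = Nothing
pop (Fifo f  [])     = pop (Fifo [] (reverse f))   -- the occasional O(n) reverse
pop (Fifo f  (x:bs)) = Just (x, Fifo f bs)         -- amortized O(1) overall

For the bounded-queue use case in the question, one option is to peek via pop: the element it returns is the oldest entry, so its age can be checked without walking the whole structure.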

I want to reconstruct the incidence structure of a graph in Haskell, which is given by the output of a breadth first traversal of it. Explicitly, the output consists of a root vertex and a list of neighborhoods (a neighborhood is a list of vertices marked as new or old (= already visited)), where each neighborhood corresponds to the least vertex which has not been assigned to a neighborhood, yet.

In any imperative language, I would solve the problem by using a queue:

Input: root vertex r, list of neighborhoods L
(1) Put r into the empty queue Q
(2) if Q is empty then STOP
(3) extract the first vertex v of Q
(4) extract the first neighborhood N of L
(5) append the unvisited vertices of N to Q
(6) remove the markings (new/old) of the nodes of N and assign v to N
(7) goto (2)

I tried to implement this naive algorithm in Haskell (using a list or Data.Sequence as the queue), but ghci always runs out of memory. This should not happen: although the input consists of 300MB of data, 16GB of RAM should clearly suffice.

Therefore the naive implementation seems to cause a memory leak. How would you implement this algorithm in Haskell?

Edit: Here are the (slightly simplified) data types, I use:

data Output = Out !Vertex ![[BFSNode]]
data Vertex = Vertex Integer SomeMoreComplexData
data BFSNode = New Vertex | Old Integer

data Graph = Graph ![Vertex] ![(Integer,[Integer])]

The data type "Output" contains the already parsed BFS output consisting of the root vertex and the lists of neighborhoods. BFSNode corresponds to a node in the BFS tree which belongs to either a new vertex which is visited for the first time, or to an old vertex which already has been visited and which is therefore referred by its unique number. Note that the parsing process works fine and consumes very few memory.

My aim is to convert "Output" into the data type "Graph" which consists of the lists of vertices and of an incidence list.

Here is a simplified version of my implementation:

readTree :: [[BFSNode]] -> Seq Integer -> Graph
readTree [] _ = Graph [] []
readTree (nb:nbs) qs =
    let (i :< qs') = viewl qs
        newVs = fromList $! map nodeNr . filter isNew $ nb
        (Graph vs adj) = readTree nbs $ qs' >< newVs
    in  Graph (map unNew (filter isNew nb) ++ vs) ((i,nub $ map nodeNr nb):adj)

"nbs" is the list of neighborhoods, "qs" is the queue. The function "nodeNr" extracts the unique identification number from a vertex, "isNew" tests whether a vertex is new, and "unNew" unpacks a new vertex from the data type "BFSNode".

Edit2: I think I localized the problem now. Maybe it has nothing to do with my implementation of the conversion process. My failure was to use the build in function "read" to read the data type "Output" from a file. I realized now that Haskell has problems with reading big files. Even if it were just about reading a list of integers, e.g.

main = do 
    txt <- readFile "test"
    writeFile "test2" . show $ (read txt :: [Integer]) }

the program will run out of memory if the file "test" is big enough. I understand now that it is not a good idea to parse data in this way, since "read" will load all the data into memory before showing any output, but I still do not understand why it fills 16GB of RAM although the file is not even 500MB. Do you have any idea what is wrong with "read"? Does Haskell show the same behavior on your machines?

Edit3: Now I implemented a stream based parsing function "readOutput" which takes a String and returns the data type "Output". This function is lazy, so I immediately get an output when I call it. But when I compose it with my conversion function "readTree" (which is clearly tail-recursive) I get no output at all and the memory usage increases as usual. What am I doing wrong?

Edit4: The problem in Edit3 came from some strictifications which I removed now.

This question does not specify a key ingredient - how is the graph going to be represented in Haskell? Functional programs require carefully thought out data structures to maximize sharing and run efficiently. Usually, this means they're recursively built from nothing (inductive). There's a paper on inductive graphs and functional graph algorithms that gives one representation:

module Test where

data Graph a = Empty | Extension (Graph a) [Int] (Int, a)
               deriving Show

That is, a graph is either Empty, or a (smaller) graph extended by one node. This is exactly how lists are built using Cons in functional languages, except that the additional node has to specify the smaller graph, the predecessor links ([Int]), and the new node number and data, (Int,a). Note that they also implemented this as an abstract type "for efficiency reasons."

A graph with one node can be generated by extending the empty graph.

singleton :: (Int,a) -> Graph a
singleton x = Extension Empty [] x

Using this structure, it's simple to define a recursive parse algorithm for your BFS tree.

data Mark a = Visited Int | New (Int,a) deriving Show

parse :: (Int,a) -> [[Mark a]] -> Graph a
parse x nbrs = extend Empty [x] nbrs

extend :: Graph a -> [(Int,a)] -> [[Mark a]] -> Graph a
extend g [] [] = g
extend g _  [] = Empty -- leftover nodes, really an error.
extend g [] _  = Empty -- leftover neighborhoods, really an error.
extend g (x : tl) (nbr : nbrs) =
  extend (Extension g (seen nbr) x) (news tl nbr) nbrs

news :: [(Int,a)] -> [Mark a] -> [(Int,a)]
news l (New x : tl) = news (uniq l x) tl
news l (_ : tl) = news l tl
news l [] = l

uniq :: [(Int,a)] -> (Int,a) -> [(Int,a)]
uniq (x:tl) y = x : if (fst x == fst y) then tl else uniq tl y
uniq [] y = [y]

seen :: [Mark a] -> [Int]
seen (Visited i : tl) = i : seen tl
seen (_ : tl) = seen tl
seen [] = []

m0 = [New (1,())]
m1 = [Visited 0, New (2,()), New (3,())]
m2 = [Visited 1, New (3,())]
m3 = [Visited 1, Visited 2]    
nbrs = [m0,m1,m2,m3]

Testing it out,

$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :load Test
[1 of 1] Compiling Test             ( Test.hs, interpreted )
Ok, modules loaded: Test.
*Test> parse (0,()) nbrs
Extension (Extension (Extension (Extension Empty [] (0,())) [0] (1,())) [1] (2,())) [1,2] (3,())

For efficiency, you could do the following:

  1. The news and seen functions could be combined, e.g. let (ns, sn) = newseen nbr ([], []), and made tail-recursive (passing their partially constructed lists and returning immediately) for efficiency; see the sketch after this list.

  2. Your input could keep track of the node at the center of each neighbor list. This would avoid the list concatenation in the stack of neighbors. Alternatively, you could use a functional deque to hold that stack.
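
A possible shape for that fused helper (a sketch of mine, reusing the Mark and uniq definitions from the code above; note that the list of seen indices comes out reversed, so reverse it afterwards if the order matters):

newseen :: [Mark a] -> ([(Int,a)], [Int]) -> ([(Int,a)], [Int])
newseen (New x     : tl) (ns, sn) = newseen tl (uniq ns x, sn)
newseen (Visited i : tl) (ns, sn) = newseen tl (ns, i : sn)
newseen []               acc      = acc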

If you haven't seen it, I'd recommend Okasaki's book on purely functional data structures.

Or phrased another way, what kind of benefits do you get from having a basic, singly linked list with only a head pointer? The benefits of a tail pointer that I can see are:

  • O(1) list concatenation
  • O(1) Appending stuff to the right side of the list

Both of which are rather convenient things to have, as opposed to O(n) list concatenation (where n is the length of the left-side list?). What advantages does dropping the tail pointer have?

In addition to what the others said: if you need efficient yet immutable data structures (which should be the idiomatic F# way), you should consider reading Chris Okasaki's Purely Functional Data Structures. There is also a thesis available (on which the book is based).

We have a very nice GoF book (Design Patterns: Elements of Reusable Object-Oriented Software) about patterns in Object Oriented Programming, and plenty of articles and resources in the web on this subject.

Are there any books (articles, resources) on patterns(best practices) for functional programming?

For dynamic programming in languages like Python and Ruby?

For AOP?

There is a Design Patterns in Ruby book.

Besides the design patterns mentioned in GoF, it also lists some other patterns, like Convention over Configuration.

A related question was asked before: "Does functional programming replace GoF design patterns", with great responses.

The equivalent of "design patterns" is very vague in FP. In general, every time you see a "pattern" in your code you should create something to cover all of its uses in a uniform way. Often it will be a higher-order function.

For example, the following C code

for (int i = 0; i < n; i++)
  if (a[i] == 42)
    return true;
return false;

can be thought of as a basic "design pattern" - checking whether the list contains a special element. This snippet could appear many times in code with different conditions. In FP, you simply use a higher-order function several times. It's not a "pattern" anymore.
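
For instance, in Haskell that whole loop collapses into an application of the standard higher-order function any (a one-line illustration of mine, not from the original answer):

containsAnswer :: [Int] -> Bool
containsAnswer xs = any (== 42) xs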

Functional programming has its own practices, but they are much different from "design patterns" in OOP. They include use of polymorphism, lists, higher-order functions, immutability/purity, laziness [not all are essential or specific to FP]... See also "what are core concepts of FP". Also, type classes (Haskell), modules and functors (OCaml), continuations, monads, zippers, finger trees, monoids, arrows, applicative functors, monad transformers, many purely functional data structures (book) etc. Functional pearls, already mentioned by Randall Schulz, form a very rich resource of FP at its best.

To learn how to write idiomatic code, any book/resource on a functional programming language will suffice IMHO (for example, RWH and LYAH); differences between thinking imperatively and functionally are always explained there.

In dynamic languages, Jeff Foster's link is a good collection; here is a very clever use of memoization in JavaScript that could be considered a "design pattern".

I'm new to Scala and currently a Java developer. Scala was designed to encourage the use of immutability in class design.

How does this translate practically into the design of classes? The only thing that comes to my mind is case classes. Are case classes strongly encouraged for defining data? An example? How else is immutability encouraged in Scala class design?

As a Java developer, my classes defining data were mutable. Should the equivalent Scala classes be defined as case classes?

Well, case classes certainly help, but the biggest contributor is probably the collection library. The default collections are immutable, and the methods are geared toward manipulating collections by producing new ones instead of mutating. Since the immutable collections are persistent, that doesn't require copying the whole collection, which is something one often has to do in Java.

Beyond that, for-comprehensions are monadic comprehensions, which helps with immutable tasks; there's tail recursion optimization, which is very important for immutable algorithms; and there is general attention to immutability in many libraries, such as parser combinators and XML.

Finally, note that you have to ask for a var to get some mutability. Parameters are immutable, and val is just as short as var. Contrast this with Java, where parameters are mutable, and you need to add a final keyword to get immutability. Whereas in Scala it is as easy or easier to stay immutable, in Java it is easier to stay mutable.

Addendum

Persistent data structures are data structures that share parts between modified versions of it. This might be a bit difficult to understand, so let's consider Scala's List, which is pretty basic and easy to understand.

A Scala List is composed of two classes, known as cons and Nil. The former is actually written :: in Scala, but I'll refer to it by the traditional name.

Nil is the empty list. It doesn't contain anything. Methods that depend on the list not being empty, such as head and tail throw exceptions, while others work ok.

Naturally, cons must then represent a non-empty list. In fact, cons has exactly two elements: a value, and a list. These elements are known as head and tail.

So a list with three elements is composed of three cons cells, since each cons holds only one value, plus a Nil. It must end with a Nil because a cons must point to a list; as lists are not circular, one of the cons cells must point to something other than a cons.

One example of such list is this:

val list = 1 :: 2 :: 3 :: Nil

Now, the components of a Scala List are immutable. One can change neither the value nor the tail of a cons. One benefit of immutability is that you never need to copy the collection before passing it to, or after receiving it from, some other method: you know that list cannot change.

Now, let's consider what would happen if I modified that list. Let's consider two modifications: removing the first element and prepending a new element.

We can remove the first element with the method tail, whose name is not a coincidence at all. So, we write:

val list2 = list.tail

And list2 will point to the same list that list's tail is pointing to. Nothing at all was created: we simply reused part of list. So, let's prepend an element to list2 then:

val list3 = 0 :: list2

We created a new cons there. This new cons has a value (a head) equal to 0, and its tail points to list2. Note that both list and list3 point to the same list2. These elements are being shared by both list and list3.

There are many other persistent data structures. The very fact that the data you are manipulating is immutable makes it easy to share components.

One can find more information about this subject in Chris Okasaki's book, Purely Functional Data Structures, or in his freely available thesis of the same name.

Generally, I have a headache because something is wrong with my reasoning:

  1. For one set of arguments, a referentially transparent function will always return one set of output values.

  2. That means that such a function can be represented as a truth table (a table where one set of output values is specified for each set of arguments).

  3. That makes the logic behind such functions combinational (as opposed to sequential).

  4. That means that with a pure functional language (one that has only referentially transparent functions) it is possible to describe only combinational logic.

The last statement is derived from this reasoning, but it's obviously false; that means there is an error in the reasoning. [Question: where is the error in this reasoning?]

UPD2: You guys are saying lots of interesting stuff, but not answering my question. I have defined it more explicitly now. Sorry for messing up the question definition!

As far as I understand it, referential transparency just means: A given function will always yield the same result when invoked with the same arguments. So, the mathematical functions you learned about in school are referentially transparent.

A language you could check out in order to learn how things are done in a purely functional language would be Haskell. There are ways to use "updateable storage possibilities" like the Reader Monad, and the State Monad for example. If you're interested in purely functional data structures, Okasaki might be a good read.

And yes, you're right: order of evaluation in a purely functional language like Haskell does not matter as it does in non-functional languages, because if there are no side effects, there is no reason to do something before/after something else -- unless the input of one depends on the output of the other, or means like monads come into play.

I don't really know about the truth-table question.

I'm building a clustering algorithm in C++, but I don't deal well with OOP and the state of variables (member data) that change. For an algorithm of some complexity, I find this an obstacle to my development.

So, I was thinking of changing the programming language to one of the functional languages: OCaml or F#. Apart from having to change my mindset on how to approach programming, there's something that I need clarified. In C++, I use a double-ended queue to slide a window of time through the data. After some period of time, the oldest data is removed and newer data is appended. Data that is not yet too old remains in the double-ended queue.

Another, more demanding task is to compare properties of pairs of objects. Each object is the data from a certain period of time. And if I have one thousand data objects in a certain time window, I need to compare each one against anywhere from none to twenty or thirty others, depending. And some properties of the object being compared may change as a result of this comparison. In C++, I do it all using references, which means that I access objects in memory and they are never copied, so the algorithm runs at full speed (to the best of my C++ knowledge).

I've been reading about functional programming, and the idea I get is that each function performs some operation and the original data (the input) is not changed. This means that the language copies the data structure and performs the required transformation. If so, using functional programming would slow the algorithm down a great deal. Is this correct? If not, i.e., if there is a fast way to perform transformations on data, could you show me how? A very small example would be great.

I'm hoping to have some kind of facility. I've read that both OCaml and F# are used in research and scientific projects.

At a high level your question is whether using immutable data is slower than using mutable data. The answer to this is yes, it is slower in some cases. What's surprising (to me) is how small the penalty is. In most cases (in my experience) the extra time, which is often a log factor, is worth the extra modularity and clarity of using immutable data. And in numerous other cases there is no penalty at all.

The main reason that it's not as much slower as you would expect is that you can freely reuse any parts of the old data. There's no need to worry that some other part of the computation will change the data later: it's immutable!

For a similar reason, all accesses to immutable data are like references in C++. There's no need to make copies of data, as other parts of the computation can't change it.

If you want to work this way, you need to structure your data to get some re-use. If you can't easily do this, you may want to use some (controlled) mutation.

Both OCaml and F# are mixed-paradigm languages. They allow you to use mutable data if you want to.

The most enlightening account of operations on immutable data (IMHO) is Chris Okasaki's book Purely Functional Data Structures. (This Amazon link is for info only, not necessarily a suggestion to buy the book :-) You can also find much of this information in Okasaki's PhD thesis.

I've had the need for a multi-threaded data structure that supports these claims:

  • Allows multiple concurrent readers and writers
  • Is sorted
  • Is easy to reason about

Fulfilling multiple readers and one writer is a lot easier, but I really would wan't to allow multiple writers.

I've been doing research into this area, and I'm aware of ConcurrentSkipList (by Lea based on work by Fraser and Harris) as it's implemented in Java SE 6. I've also implemented my own version of a concurrent Skip List based on A Provably Correct Scalable Concurrent Skip List by Herlihy, Lev, Luchangco and Shavit.

These two implementations were developed by people who are light years smarter than me, but I still (somewhat ashamedly, because it is amazing work) have to ask whether these are the only two viable implementations of concurrent multi-reader/writer data structures available today?

Sounds to me like you're making this problem too hard for yourself. Consider the following:

  • It's pretty easy to implement immutable versions of many data structures, especially trees. Immutable data structures have the benefit that, by virtue of being immutable, one thread can't modify the collection under another thread's nose. Immutability = no race conditions = no locks = no deadlocking. Awesomeness.

    See Okasaki's Purely Functional Data Structures, which provides ML and Haskell implementations of heaps, balanced trees, stacks, queues, and some other data structures.

  • Threads can't see changes to an immutable data structure made in other threads. They can, however, notify one another explicitly of changes using message-passing concurrency.

Locks and mutexes are too low-level, and mutable state is pretty much the enemy of multithreaded programming. If you think about whatever problem you're trying to solve in terms of immutability and message passing, then it'll become 1000x easier for you.
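
To make the second bullet concrete, here is a tiny sketch of mine (not from the answer) in Haskell: two threads never share mutable state; one builds a new version of an immutable map and hands it to the other over a channel.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import qualified Data.Map as Map

main :: IO ()
main = do
  ch <- newChan
  let m0 = Map.fromList [(1, "a"), (2, "b")]
  _ <- forkIO $ do
    -- "Modifying" the map builds a new one that shares most structure with m0.
    let m1 = Map.insert 3 "c" m0
    writeChan ch m1                -- notify the other thread by message passing
  m1 <- readChan ch
  print (Map.toList m1)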

I would like to build an immutable tree data structure representing an arbitrary subset of a filesystem directory structure. There would typically be a filter that knows about includes/excludes, and I would basically want to have some threading support in the construction.

This sounds like pure nerd fun to code myself, but I am actually wondering if there are any good examples, texts or similar on this topic? Source code is nice ;)

I have run into problems writing the code to delete a node from a tree.

Given a BST and a key value, find the key in the tree and delete it.

So here is my thinking. First, if the BST is nil then return nil, and if the BST has only one node, the root, then return nil as well.

Then, if a key in the BST matches the given key, check how many children this node has. If the node has no children at all, then recreate the BST from the first predecessor (the root) down to the last predecessor of this node, and share all the remaining data that is not on that path.

If the node has one child, treat it like the childless case, but attach its child to the last predecessor.

For a node with two children, I have to find some node that does not have any children to take its place.

The hard part comes when writing the code: I don't really know how to recreate and share the data of the tree.

So can someone offer some hint or clue?

This would be a long answer, so please let me apologize in advance for pointing you towards a book and not directly answering. I highly recommend looking at Purely Functional Data Structures which is (legally) available as a PDF from the author. Though it's a good book to have in print/ebook anyway.

And the super-short answer is: use Clojure's built-in sorted-map if you want this in practice (though writing your own will of course earn nerd street cred), because sorted maps use a persistent red-black tree under the hood.
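
For the path-copying part of the question, here is a minimal Haskell sketch of mine (not from the book or from Clojure's implementation, and with no rebalancing): deletion rebuilds only the nodes on the search path and shares every untouched subtree, which is exactly the "recreate and share" the question is asking about.

data BST a = Leaf | Node (BST a) a (BST a)   deriving Show

delete :: Ord a => a -> BST a -> BST a
delete _ Leaf = Leaf
delete k (Node l x r)
  | k < x     = Node (delete k l) x r     -- rebuild this node, share r untouched
  | k > x     = Node l x (delete k r)     -- rebuild this node, share l untouched
  | otherwise = merge l r                 -- found it: splice the two subtrees
  where
    merge Leaf r' = r'
    merge l' Leaf = l'
    merge l' r'   = let (m, r'') = deleteMin r' in Node l' m r''

    -- Remove and return the smallest element of a non-empty tree.
    deleteMin (Node Leaf y r') = (y, r')
    deleteMin (Node l'   y r') = let (m, l'') = deleteMin l' in (m, Node l'' y r')
    deleteMin Leaf             = error "deleteMin: empty tree"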

I've been learning f# in the previous days, writing a small project which, at last, works (with the help of SO, of course).

I'm trying to learn to be as idiomatic as possible, which basically means that I try not to mutate my data structures. This is costing me a lot of effort :-) In my search for idiomatic functional programming, I have been trying to use lists, tuples and records as much as possible, rather than objects. But then "practicality beats purity", and so I'm rewriting my small project using objects this time.

I thought that you could give me some advice, surely my idea of "good functional programming design" is not yet very well defined.

For instance, I have to modify the nodes of a tree, modifying at the same time the state at two different levels (L and L+1). I've been able to do that without mutating data, but I needed a lot of "inner" and "helper" functions, with accumulators and so on. The nice feeling of being able to clearly express the algorithm was lost for me, due to the need to modify my data structure in such an involved way. This is extremely easy in imperative languages: just dereference the pointers to the relevant nodes, modify their state and iterate over them. Surely I have not designed my structure properly, and for this reason I'm now trying the OOP approach.

I've looked at SICP, at How to design programs and have found a thesis by C. Okasaki ("Purely functional data structures") but the examples on SICP and HTDP are similar to what I did, or maybe I'm not able to understand them fully. The thesis on the other hand is a bit too hard for me at the moment :-)

What do you think about this "tension" which I am experiencing? Am I interpreting the "never mutate data" too strictly? Could you suggest me some resource?

Thanks in advance, Francesco

What do you think about this "tension" which I am experiencing? Am I interpreting the "never mutate data" too strictly? Could you suggest me some resource?

In my opinion, if you're learning functional programming for the first time, its best to start out with zero mutable state. Otherwise, you'll only end up falling back on mutable state as your first resort, and all of your F# code will be C# with a little different syntax.

Regarding data structures, some are easier to express in a functional style than others. Could you provide a description of how you're trying to modify your tree?

For now, I would recommend F# Wikibook's page on data structures to see how data structures are written in a functional style.

I've looked at SICP, at How to design programs and have found a thesis by C. Okasaki ("Purely functional data structures")

I personally found Okasaki's book more readable than the thesis online.

I would like to represent a kind of queue data structure functionally, but I haven't really gotten anywhere. I've looked into Zippers but they don't seem to be the right structure.

Specifically, I'm trying to represent a series of delay lines (for audio effects like echo or reverb), so the functionality needed is as follows:

  1. Append data to the front
  2. Remove the last item (can just be thrown away)

For my particular use, these two operations would be used in conjunction as to keep the queue a constant size, but this constraint is not fundamental. I could just use a list, but I'm thinking there's got to be something cleaner than that. What is the best way to represent this type?

I'm using F#, but any language is welcome.

By functional I assume you mean an immutable queue?

If you use F# and .NET there are for example:

If you'd like to read about how to implement a functional queue, I recommend Purely Functional Data Structures by Chris Okasaki.

One of the first ways Okasaki implements a functional queue is using two lists, one you pop from and one you push to. When the pop list is empty, the push list is reversed and becomes the pop list.

Bear in mind this is in many ways a rather inefficient queue but it's also rather simple:

type Queue<'T> = 'T list*'T list

let empty<'T> : Queue<'T> = [], []

let isEmpty ((f, r) : Queue<'T>) : bool =
  match f, r with
  | []    , []  -> true
  | _     , _   -> false

let headAndTail ((f, r) : Queue<'T>) : 'T*Queue<'T> =
  match f, r with
  | []    , []  -> failwith "Queue is empty"
  | v::vs , r   -> v, (vs, r)
  | _     , r   -> let v::vs = List.rev r in v, (vs, [])

let snoc ((f, r) : Queue<'T>) (v : 'T) : Queue<'T> = (f, v::r)

let fold (f : 'S -> 'T -> 'S) (s : 'S) (q : Queue<'T>) : 'S =
  let rec loop ss qq =
    if isEmpty qq then ss
    else
      let hh, tt = headAndTail qq
      loop (f ss hh) tt
  loop s q

let ofArray (vs : 'T []) : Queue<'T> = vs |> Array.fold snoc empty

[<EntryPoint>]
let main argv = 
  let q = [| 1..20 |] |> ofArray
  fold (fun _ v -> printfn "%A" v) () q
  0

According to the theory of ADTs (Algebraic Data Types), the concatenation of two lists has to take O(n), where n is the length of the first list. You basically have to recursively iterate through the first list until you find its end.

From a different point of view, one can argue that the second list can simply be linked to the last element of the first. This would take constant time, if the end of the first list is known.

What am I missing here ?

It's because of immutable state. A list is an object + a pointer, so if we imagined a list as a Tuple it might look like this:

let tupleList = ("a", ("b", ("c", [])))

Now let's get the first item in this "list" with a "head" function. This head function takes O(1) time because we can use fst:

> fst tupleList

If we want to swap out the first item in the list with a different one we could do this:

let tupleList2 = ("x",snd tupleList)

Which can also be done in O(1). Why? Because absolutely no other element in the list stores a reference to the first entry. Because of immutable state, we now have two lists, tupleList and tupleList2. When we made tupleList2 we didn't copy the whole list. Because the original pointers are immutable we can continue to reference them but use something else at the start of our list.

Now let's try to get the last element of our 3 item list:

> fst . snd . snd $ tupleList

That took 3 steps, which is equal to the length of our list, i.e. O(n).

But couldn't we store a pointer to the last element in the list and access that in O(1)? To do that we would need an array, not a list. An array allows O(1) lookup of any element because it is laid out as a contiguous block of memory with constant-time indexing.

(ASIDE: If you're unsure of why we would use a Linked List instead of an Array then you should do some more reading about data structures, algorithms on data structures and Big-O time complexity of various operations like get, poll, insert, delete, sort, etc).

Now that we've established that, let's look at concatenation. Let's concat tupleList with a new list, ("e", ("f", [])). To do this we have to traverse the whole list just like getting the last element:

tupleList3 = (fst tupleList, (fst $ snd tupleList, (fst . snd . snd $ tupleList, ("e", ("f", [])))))

The above operation is actually worse than O(n) time, because for each element in the list we have to re-read the list up to that index. But if we ignore that for a moment and focus on the key aspect: in order to get to the last element in the list, we must traverse the entire structure.

You may be asking, why don't we just store in memory what the last list item is? That way appending to the end of the list would be done in O(1). But not so fast, we can't change the last list item without changing the entire list. Why?

Let's take a stab at how that might look:

data Queue a = Queue { last :: Queue a, head :: a, next :: Queue a } | Empty
appendEnd :: a -> Queue a -> Queue a
appendEnd a2 (Queue l h n) = ????

IF I modify "last", which is an immutable variable, I won't actually be modifying the pointer for the last item in the queue. I will be creating a copy of the last item. Everything else that referenced that original item, will continue referencing the original item.

So in order to update the last item in the queue, I have to update everything that has a reference to it. Which can only be done in optimally O(n) time.

So in our traditional list, we have our final item:

List a []

But if we want to change it, we make a copy of it. Now the second last item has a reference to an old version. So we need to update that item.

List a (List a [])

But if we update the second last item we make a copy of it. Now the third last item has an old reference. So we need to update that. Repeat until we get to the head of the list. And we come full circle. Nothing keeps a reference to the head of the list so editing that takes O(1).

This is the reason that Haskell doesn't have Doubly Linked Lists. It's also why a "Queue" (or at least a FIFO queue) can't be implemented in a traditional way. Making a Queue in Haskell involves some serious re-thinking of traditional data structures.

If you become even more curious about how all of this works, consider getting the book Purely Functional Data Structures.

EDIT: If you've ever seen this: http://visualgo.net/list.html you might notice that in the visualization "Insert Tail" happens in O(1). But in order to do that we need to modify the final entry in the list to give it a new pointer. Updating a pointer mutates state which is not allowed in a purely functional language. Hopefully that was made clear with the rest of my post.

I'm searching for an algorithm (or an argument of such an algorithm) in functional style which is faster than an imperative one.

I like functional code because it's expressive and mostly easier to read than its imperative counterparts. But I also know that this expressiveness can cost runtime overhead. Thanks to techniques like tail recursion it isn't always slower, but it often is.

While programming I don't think about the runtime cost of functional code, because nowadays PCs are very fast and development time is more expensive than runtime. Furthermore, for me readability is more important than performance. Nevertheless my programs are usually fast enough, so I rarely need to solve a problem in an imperative way.

There are some algorithms which in practice should be implemented in an imperative style (like sorting algorithms), because otherwise in most cases they are too slow or require lots of memory. In contrast, thanks to techniques like pattern matching, a whole program such as a parser written in a functional language may be much faster than one written in an imperative language, because of the compiler's opportunities to optimize the code.

But are there any algorithms which are faster in a functional style, or are there ways to argue that such an algorithm exists?

FWIW there are Purely functional data structures, which benefit from functional programming.

There's also a nice book on Purely Functional Data Structures by Chris Okasaki, which presents data structures from the point of view of functional languages.

Another interesting article is Announcing Intel Concurrent Collections for Haskell 0.1, about parallel programming, in which they note:

Well, it happens that the CnC notion of a step is a pure function. A step does nothing but read its inputs and produce tags and items as output. This design was chosen to bring CnC to that elusive but wonderful place called deterministic parallelism. The decision had nothing to do with language preferences. (And indeed, the primary CnC implementations are for C++ and Java.)

Yet what a great match Haskell and CnC would make! Haskell is the only major language where we can (1) enforce that steps be pure, and (2) directly recognize (and leverage!) the fact that both steps and graph executions are pure.

Add to that the fact that Haskell is wonderfully extensible and thus the CnC "library" can feel almost like a domain-specific language.

It doesn't say anything about performance – they promise to discuss some of the implementation details and performance in future posts – but Haskell, with its "pureness", fits nicely into parallel programming.

Clojure truly piqued my interest, and I started going through a tutorial on it: http://java.ociweb.com/mark/clojure/article.html

Consider these two lines mentioned under "Set":

(def stooges (hash-set "Moe" "Larry" "Curly")) ; not sorted
(def more-stooges (conj stooges "Shemp")) ; -> #{"Moe" "Larry" "Curly" "Shemp"}

My first thought was that the second operation should take constant time to complete; otherwise a functional language might have little benefit over an object-oriented one. One can easily imagine a need to start with a [nearly] empty set, and populate it and shrink it as we go along. So, instead of assigning the new result to more-stooges, we could re-assign it to itself.

Now, by the marvelous promise of functional languages, side effects are not a concern. So, the sets stooges and more-stooges should never affect each other. So, either the creation of more-stooges is a linear operation, or they share a common buffer (like Java's StringBuffer), which would seem like a very bad idea and conflict with immutability (since stooges could subsequently drop elements one by one).

I am probably reinventing the wheel here. It seems like the hash-set would be more performant in Clojure when you start with the maximum number of elements and then remove them one at a time until the set is empty, as opposed to starting with an empty set and growing it one element at a time.

The examples above might not seem terribly practical, or might have workarounds, but object-oriented languages like Java/C#/Python/etc. have no problem with either growing or shrinking a set one or a few elements at a time, while also doing it fast.

A [functional] language which guarantees (or just promises?) immutability would not be able to grow a set as fast. Is there another idiom that one can use which somehow helps avoid doing that?

For someone familiar with Python, I would mention set comprehensions versus an equivalent loop approach. The running time of the two is a tiny bit different, but that has to do with the relative speeds of C, Python and the interpreter, and is not rooted in complexity. The problem I see is that a set comprehension is often a better approach, but NOT ALWAYS the best approach, since readability might suffer a great deal.

Let me know if the question is not clear.

The core immutable data structures are one of the most fascinating parts of the language for me as well. There is a lot to answering this question, and Rich does a really great job of it in this video:

http://blip.tv/file/707974

The core data structures:

  • are actually fully immutable
  • the old copies are also immutable
  • performance does not degrade for the old copies
  • access is effectively constant time (bounded by a constant)
  • all support efficient appending, concatenating (except lists and seqs) and chopping

How do they do this???

  • the secret: it's pretty much all trees under the hood (actually a trie).

But what if I really want to edit something in place?

  • you can use Clojure's transients to edit a structure in place and then produce an immutable version (in constant time) when you are ready to share it.

As a little background: a trie is a tree where the common parts of the keys are hoisted up towards the top of the tree. The sets and maps in Clojure use a trie where the index is a hash of the key you are looking for. The hash is broken up into small chunks, and each chunk is used as the key to one level of the hash trie. This allows the common parts of the new and old maps to be shared, and the access time is bounded because there can only be a fixed number of branches, since the hash used as input has a fixed size.

Using these hash tries also helps prevent the big slowdowns during rebalancing that many other persistent data structures suffer from, so you actually get fairly constant wall-clock access time.
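To make the bit-partitioning idea concrete, here is a toy sketch in Haskell (not Clojure's actual implementation, which uses 32-way branching, packed arrays, and hashes the key first; all names here are made up, and this version branches on the Int key's own bits for simplicity). Inserting a key rebuilds only the path from the root to the affected slot, and every untouched subtree is shared with the old version:

import Data.Bits (shiftR, (.&.))
import Data.Maybe (fromMaybe)

-- children are indexed by successive 4-bit chunks of the key,
-- so the depth is bounded by the key's bit width
data Trie v = Empty
            | Leaf Int v
            | Branch [(Int, Trie v)]

chunk :: Int -> Int -> Int          -- the d-th 4-bit slice of the key
chunk d k = (k `shiftR` (4 * d)) .&. 0xF

insert :: Int -> v -> Trie v -> Trie v
insert = go 0
  where
    go _ k v Empty       = Leaf k v
    go d k v (Leaf k' v')
      | k == k'          = Leaf k v
      | otherwise        = go d k v (Branch [(chunk d k', Leaf k' v')])
    go d k v (Branch cs) =
      let c   = chunk d k
          sub = fromMaybe Empty (lookup c cs)
      in  Branch ((c, go (d + 1) k v sub) : filter ((/= c) . fst) cs)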

I really recommend the (relatively short) book Purely Functional Data Structures. In it, Okasaki covers a lot of really interesting structures and concepts like "removing amortization" to allow true constant-time access for queues, and things like lazy persistent queues. The author even offers a free copy in PDF here.

I am trying to write a tutorial game in Scala & Processing, intending to use as much FP as possible. However, I have come to the conclusion that immutable-state game objects are not profitable in such an application. If an object is big, having numerous such objects constantly updated (and therefore copied each cycle, for example via the copy() function) may result in quite intensive memory consumption. What is the usual approach to resolving this? The only thing that I have come up with is to slice the object into tiny piece-objects so that only those that need to be updated are updated, leaving the "big" objects the same.

First of all, don't do premature optimisation. Have you measured your code? Maybe there are some concrete bottlenecks?

Since most of the objects consist of smaller objects joined via data structures, I think you can get rid of this problem by using persistent data structures.

persistent data structure is a data structure which always preserves the previous version of itself when it is modified; such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure

Here is a wonderful talk about some of them by Daniel Spiewak. If you want more, take a look at Purely Functional Data Structures by Chris Okasaki.

I see posts like the for-comprehension in [1] and it really makes me wonder what the overall implication of using an immutable Map vs a mutable one is. It seems like Scala developers are very comfortable with allowing mutations of immutable data structures to incur the cost of a new object, or maybe I'm just missing something. If every mutation operation on an immutable data structure returns a new instance (I understand it's good for thread safety), what if I already know how to fine-tune my mutable objects to make those same guarantees?

[1] In Scala, how can I do the equivalent of an SQL SUM and GROUP BY?

The question is very generic, so it is hard to give a definite answer. It seems that you are just uncomfortable with the amount of object allocation happening in idiomatic Scala code using for comprehensions and the like.

The Scala compiler does not do any special magic to fuse operations or to elide object allocations. It is up to the person writing the data structure to make sure that functional data structures reuse as much as possible from previous versions (structural sharing). Many of the data structures used in the Scala collections do this reasonably well. See for example this talk about Functional Data Structures in Scala to get a general idea.

If you are interested in the details, the book to get is Purely Functional Data Structures by Chris Okasaki. The material in this book also applies to other functional languages such as Haskell, OCaml and Clojure.

The JVM is extremely good at allocating and collecting short-lived objects, so many things that seem outrageously inefficient to somebody accustomed to low-level programming are actually surprisingly efficient. But there are definitely situations where mutable state has performance or other advantages. That is why Scala does not forbid mutable state, but only has a preference towards immutability. If you find that you really need mutable state for performance reasons, it is usually a good idea to wrap your mutable state in an Akka actor instead of trying to get low-level thread synchronization right.

Let's say we have existing tree-like data and we would like to add information about the depth of each node. How can we easily achieve that?

data Tree = Node Tree Tree | Leaf

For each node we would like to know, in constant time, how deep it is. We get the data from an external module, so we only have the information shown above. A real-life example would be an external HTML parser which just provides the XML tree, where we would like to gather data such as how many hyperlinks every node contains.

Functional languages are made for traversing trees and gathering data, so there should be an easy solution.

The obvious solution would be to create a parallel structure. Can we do better?

The standard trick, which I learned from Chris Okasaki's wonderful Purely Functional Data Structures, is to cache the results of expensive operations at each node. (Perhaps this trick was known before Okasaki's thesis; I don't know.) You can provide smart constructors to manage this information for you so that constructing the tree need not be painful. For example, when the expensive operation is depth, you might write:

module SizedTree (SizedTree, sizedTree, node, leaf, depth) where

data SizedTree = Node !Int SizedTree SizedTree | Leaf

node l r = Node (max (depth l) (depth r) + 1) l r
leaf = Leaf

depth (Node d _ _) = d
depth Leaf = 0

-- since we don't expose the constructors, we should
-- provide a replacement for pattern matching
sizedTree f v (Node _ l r) = f l r
sizedTree f v Leaf = v

Constructing SizedTrees costs O(1) extra work at each node (hence it is O(n) work to convert an n-node Tree to a SizedTree), but the payoff is that checking the depth of a SizedTree -- or of any subtree -- is an O(1) operation.
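A tiny usage sketch of the smart constructors above: because the size is stored in each node, depth just reads a field.

example :: Int
example = depth (node (node leaf leaf) leaf)   -- == 2, computed in O(1)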

Right now I have classes like:

abstract class Record {
  // Required fields
  val productCode:Option[String]
  val price:Option[Double]

  // Optional fields
  val notes:Option[String] = None
  val used:Option[Boolean] = Option(false)
}

Then create them:

val r = new Record {
  override val productCode = Option("abc")
  override val price = Option(32.12)
}

A few things to note:

  1. I use Option for the required fields so that (a) I don't have to remember which fields are optional and (b) I can change which fields are optional without changing my interface
  2. The Option stuff adds a lot of noise. I'd love for that not to be there, but I also don't want to use nulls. This is particularly true when taking into account all the calls to getOrElse when I'm using the structure. (I bet there's a clever way for the language to declaratively autobox these.)
  3. This makes mass assignment (which I'm doing because I have an array of fields) difficult if a subclass mixes new fields in, e.g.:

    override val List(productCode, price, discount) = fields // fields is a List

will not compile because discount is not defined in the superclass and is therefore not an override. I'm not sure if there is a way to do this.

My main question is:

  1. Is there a better overall way to manage immutable data structures?
  2. Is there a straightforward way to copy a record and change just one value without writing boilerplate code?

e.g. (pseudocode):

val r2 = r.clone { override val used = true }

I have heard 2.8 introduces something like this for case classes; however, in a language that encourages immutable data structures, I'd be surprised to find out this was not easier before 2.8. I'm still on 2.7.

There is no easy way to clone instances. FWIW, immutable data structures are usually deep. For instance, the List class has only two members: hd and tl. A list grows by chaining members.

You clone such structures by creating the minimum amount of new data structure and referencing as much of the old data structure as possible. Usually, this is done through recursion.
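As an illustration of the idea (sketched in Haskell rather than Scala, with a made-up record type), a "copy with one field changed" builds a new top-level value that still points at all the unchanged fields of the old one:

data Record = Record { productCode :: Maybe String
                     , price       :: Maybe Double
                     , used        :: Maybe Bool }

r, r2 :: Record
r  = Record (Just "abc") (Just 32.12) (Just False)
r2 = r { used = Just True }   -- r is untouched; r2 shares productCode and price with r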

You can learn more about this in the book Purely Functional Data Structures. The thesis on which the book is based is freely available.

You can look up Scala questions here to see interesting ways to handle Option data. Unfortunately, I don't have any solutions to your other concerns.

Is there an algorithm that implements a purely functional set?

Expected operations would be union, intersection, difference, element?, empty? and adjoin.

Those are not hard requirements though and I would be happy to learn an algorithm that only implements a subset of them.

In Haskell, most numeric types are native - Int, Float, Word32 etc. There is also a popular representation of unary natural numbers using ADTs only - that is, the Peano encoding:

data Nat = Succ Nat | Zero

That datatype, while elegant, is not very efficient. Multiplication, exponentiation and division with unary numbers are impractical. My question is: if we didn't have native types to count on, what would be the most efficient representation of numbers - nats, ints, fracs, complex, etc. - in a pure functional language such as Haskell? What would the datatypes and the respective algorithms look like?

It depends very much on what you want to do with the numbers and what you mean by most efficient.

If you want to represent a natural number n, you need log n bits of information. And since an ADT can have only finitely many distinct constructors, each node encodes a bounded number of bits, so you need a structure with at least on the order of log n nodes.

I highly recommend the chapter Numerical Representations from Chris Okasaki's Purely Functional Data Structures (the thesis is available online here). It describes various tree-like data structures, supporting different sets of operations, and how they relate to natural numbers. Everything below is what I learned from that book.

Expanding on Cirdec's comment: You can define

data N = Zero | Positive Positive
data Positive = One | Add Positive Positive

This gives you O(1) addition and subtraction by one. On the other hand, the size of the structure is O(n).
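For instance, assuming N wraps a Positive as in the declaration above, addition only allocates a single Add node:

add :: N -> N -> N
add Zero         n            = n
add n            Zero         = n
add (Positive a) (Positive b) = Positive (Add a b)   -- O(1), but the term keeps growing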

You can use a binary representation with O(log n) space, but then addition will be O(log n):

data N = Zero | Positive Positive
data Positive = One | Twice Positive | TwicePlusOne Positive

Increment and decrement will be almost amortized O(1). A sequence of increments goes to depth d only every 2^d operations, so on average each increment is O(1). Similarly for decrements. I said almost above because, if you interleave increments and decrements, you can flip back and forth between an O(log n) increment and an O(log n) decrement. A solution to this is to add some redundancy:

data N = Zero | Positive Positive
data Positive = One | Twice Positive | TwicePlusOne Positive | TwicePlusTwo Positive

Now every time an operation needs to go one level deeper, it leaves the current node at TwicePlusOne, which means the next operation affecting that node will stop at it, regardless of whether it's an increment or a decrement.
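A sketch of increment and decrement for this last representation (again assuming N carries its Positive payload); the recursive cases always leave the node they pass through as TwicePlusOne, which is exactly the property described above:

inc :: N -> N
inc Zero         = Positive One
inc (Positive p) = Positive (incP p)

incP :: Positive -> Positive
incP One              = Twice One                 -- 1 + 1 = 2
incP (Twice p)        = TwicePlusOne p            -- O(1)
incP (TwicePlusOne p) = TwicePlusTwo p            -- O(1)
incP (TwicePlusTwo p) = TwicePlusOne (incP p)     -- 2p + 3 = 2(p + 1) + 1

dec :: N -> N
dec Zero         = Zero                           -- or signal an error
dec (Positive p) = maybe Zero Positive (decP p)

decP :: Positive -> Maybe Positive
decP One              = Nothing                                  -- 1 - 1 = 0
decP (Twice p)        = Just (maybe One TwicePlusOne (decP p))   -- 2p - 1 = 2(p - 1) + 1
decP (TwicePlusOne p) = Just (Twice p)                           -- O(1)
decP (TwicePlusTwo p) = Just (TwicePlusOne p)                    -- O(1)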

If you want constant time addition, the data structure can be extended for that (look for Lists With Efficient Catenation in the book), but then again you can end up with O(n) memory, if the sequence of operations is bad. There is an open SO question How can natural numbers be represented to offer constant time addition? asking if it's possible to get both.

I'm planning to invest some time every week studying data structures and algorithms.
Do you recommend the MIT book "Introduction to Algorithms, 3rd Edition" by Cormen, Leiserson, Rivest and Stein?
AFAIK this book is legendary but I don't know its target audience.

Is this book suitable for my purpose, or is it for academic studies? Is it loaded with heavy math?

For Java I recommend Algorithms in Java, Parts 1-4 by Robert Sedgewick. And the companion book Algorithms in Java, Part 5: Graph Algorithms by Robert Sedgewick.

For general studies I also have the Introduction to Algorithms book; it is a good general reference. Algorithms, Fourth Edition by Robert Sedgewick looks good as well, but probably covers a lot of material already in the previously mentioned books.

For Clojure, you will probably want a functionally oriented algorithms book. Pearls of Functional Algorithm Design looks like it might be a good companion to the more general procedural books.

I read Computer Algorithms by Horowitz and Sahni; it's quite easy to follow, with plenty of examples and pseudocode.

In addition to Cormen, I'd recommend reading Purely Functional Data Structures, if you're using both Java and Clojure.

I'm reading a tutorial on OCaml by Jason Hickey, and here, in short, is the proposed way of building a tree:

type 'a elem = Empty | Node of 'a * 'a elem * 'a elem;;

let rec insert x = function
    | Empty -> Node (x, Empty, Empty)
    | Node (y, left, right) as node ->
        if x < y then
            Node (y, insert x left, right)
        else if x > y then
            Node (y, left, insert x right)
        else
            node;;

Do I understand correctly that this approach makes a copy of the part of the tree where the new element is inserted and attaches part of the old tree to this new copy?

If so, is my assessment that each insertion creates only O(height(tree)) nodes proper?

Does this (to me a bit unusual) method rely on the fact that, when inserting many values one by one, all older copies of groups of nodes will be efficiently deleted by the GC?

Do I understand correctly that this approach makes a copy of the part of the tree where the new element is inserted and attaches part of the old tree to this new copy?

If so, is my assessment that each insertion creates only O(height(tree)) nodes proper?

Yes. If you balance the tree properly (e.g. Red-Black trees) then this means insertion is O(log(n)).

Does this (to me a bit unusual) method rely on the fact that, when inserting many values one by one, all older copies of groups of nodes will be efficiently deleted by the GC?

Yes. Functional programming languages typically produce a lot of short-lived garbage, e.g. tuples, closures, and small data type values. But implementations are optimised to make this very cheap (e.g. via a light-weight heap representation, pointer-bump allocation, and generational collection).

Note also that there is one fundamental advantage to this approach: functional data structures are automatically persistent, i.e. the old version stays valid, and multiple versions of a data structure can be used at the same time. With imperative data structures you have two options when you need to "restore" an old version: (1) copying the whole structure beforehand, or (2) maintaining a change log and running it backwards. Both options are often more expensive than using a functional structure, where persistence comes for free.
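As a small illustration of persistence (sketched in Haskell with Data.Map, but the same holds for the OCaml tree above): every intermediate version remains valid, so keeping an undo history is just keeping the old values, and the versions share most of their structure.

import qualified Data.Map as Map

versions :: [Map.Map Int String]
versions = scanl (\m (k, v) -> Map.insert k v m) Map.empty
                 [(1, "a"), (2, "b"), (3, "c")]
-- versions !! 0 is still the empty map; each later version reuses
-- most of the previous one's internal tree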

See Chris Okasaki's excellent book on Purely Functional Data Structures for a detailed discussion of complexity, amortised costs and various other aspects of various functional data structures. (His thesis covers most of the same content and is freely available.)

Is there a standard queue implementation for Haskell? I see several reasonably mature priority queue implementations, but no simple queues. Data.Sequence seems OK, but I assume we could get better performance with a more restricted datatype. Further, restricting the operations (ie, not a deque) can prevent bugs from dequeuing from the wrong end.

Edit:

To clarify, I was hoping for a mature Haskell implementation, preferably in Haskell Platform or Hackage.

Okasaki, in his book Purely Functional Data Structures, describes a FIFO queue as a pair of lists, front and back, where the front list contains the front elements of the queue in the correct order, and the back list contains the rear elements of the queue in reverse order.

data Queue a = Queue [a] [a]  -- front & back lists

The idea is that new items are inserted at the front of the back list, whereas values are popped from the front list. If the front list becomes empty, it is replaced by the reverse of the back list.

The queue maintains the invariant that the front list can be empty only if the back list is also empty, and each operation runs in amortized O(1) time.

-- helper function to maintain the invariant:
-- the front list can be empty only if the back list is also empty
fill :: Queue a -> Queue a
fill (Queue [] b) = Queue (reverse b) []
fill q = q

push :: a -> Queue a -> Queue a
push x (Queue f b) = fill $ Queue f (x:b)

front :: Queue a -> Maybe a
front (Queue (x:_) _) = Just x
front _               = Nothing

pop :: Queue a -> Maybe (Queue a)
pop (Queue (_:xs) b) = Just . fill $ Queue xs b
pop _                = Nothing
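A quick usage sketch of the queue defined above (demo is just an illustrative name):

demo :: Maybe Int
demo = front =<< pop (push 3 (push 2 (push 1 (Queue [] []))))
-- enqueue 1, 2, 3, then dequeue once; the front is then Just 2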

I want to implement search using BFS. The algorithm says that I must use a queue to get the FIFO effect. I read Chris Okasaki's Purely Functional Data Structures book and found out how to make a queue (I wrote it in F#):

type 'a queue = 'a list * 'a list
let empty = [],[]
let isEmpty = function
    | [],_ -> true
    | _ -> false

let checkf = function
    | [],r -> List.rev r,[]
    | q -> q

let snoc (f,r) x = checkf (f,x :: r)

let head = function
    | ([],_) -> failwith "EMPTY"
    | (x::f,r) -> x

let tail = function
    | ([],_) -> failwith "EMPTY"
    | (x::f,r) -> checkf (f,r)

anyone know how to implement this to BFS?

And I have this code to make a tree from a list:

let data = [4;3;8;7;10;1;9;6;5;0;2]

type Tree<'a> = 
    | Node of Tree<'a> * 'a * Tree<'a>
    | Leaf

let rec insert tree element = 
    match element,tree with
    | x,Leaf        -> Node(Leaf,x,Leaf)
    | x,Node(l,y,r) when x <= y -> Node((insert l x),y,r)
    | x,Node(l,y,r) when x > y -> Node(l,y,(insert r x))
    | _,t -> t

let makeTree = List.fold insert Leaf data

(I want to combine these two pieces of code.)

OCaml is functional, so in many cases all the data is immutable, which means it is constantly creating new data, or copying data to new memory, etc.

However, it has the reputation of being fast.

Quite a number of talks about OCaml say that although it constantly creates new things, it is still fast. But I can't find anything explaining why.

Can someone summarise why it is fast even when written in a functional way?

Also, you should know that copies are not made nearly as often as you might think. Only the changed part of an immutable data structure has to be updated. For example, say you have an immutable set x. You then define y to be x with one additional item in it. The set y will share most of its underlying representation with x even though semantically x and y are completely different sets. The usual reference for this is Okasaki's Purely Functional Data Structures.
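For instance (sketched here with Haskell's Data.Set, but OCaml's Set module, also a balanced binary tree, behaves the same way), adding one element to a large set allocates only the handful of nodes on the path to the insertion point:

import qualified Data.Set as Set

x, y :: Set.Set Int
x = Set.fromList [1 .. 1000]
y = Set.insert 1001 x
-- y reuses almost all of x's internal tree; only O(log n) nodes are new,
-- and x itself is completely unchanged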

While studying Learn You a Haskell for Great Good and Purely Functional Data Structures, I thought I would try to reimplement a red-black tree while trying to structurally enforce another tree invariant.

Paraphrasing Okasaki's code, his node looks something like this:

import Data.Maybe

data Color = Red | Black

data Node a = Node {
    value :: a,
    color :: Color,
    leftChild :: Maybe (Node a),
    rightChild :: Maybe (Node a)}

One of the properties of a red black tree is that a red node cannot have a direct-child red node, so I tried to encode this as the following:

import Data.Either

data BlackNode a = BlackNode {
    value :: a,
    leftChild :: Maybe (Either (BlackNode a) (RedNode a)),
    rightChild :: Maybe (Either (BlackNode a) (RedNode a))}
data RedNode a = RedNode {
    value :: a,
    leftChild :: Maybe (BlackNode a),
    rightChild :: Maybe (BlackNode a)}

This outputs the errors:

Multiple declarations of `rightChild'
Declared at: :4:5
             :8:5


Multiple declarations of `leftChild'
Declared at: :3:5
             :7:5


Multiple declarations of `value'
Declared at: :2:5
             :6:5

I've tried several modifications of the previous code, but they all fail compilation. What is the correct way of doing this?

Different record types must have distinct field names. E.g., this is not allowed:

data A = A { field :: Int }
data B = B { field :: Char }

while this is OK:

data A = A { aField :: Int }
data B = B { bField :: Char }

The former would attempt to define two projections

field :: A -> Int
field :: B -> Char

but, alas, we can't have a name with two types. (At least, not so easily...) This issue is not present in OOP languages, where field names can never be used on their own, but they must be immediately applied to some object, as in object.field -- which is unambiguous, provided we already know the type of object. Haskell allows standalone projections, making things more complicated here.

The latter approach instead defines

aField :: A -> Int
bField :: B -> Char

and avoids the issue.
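Applied to the types in the question, one way to make them compile is simply to give each record its own field names (the names below are just illustrative):

data BlackNode a = BlackNode
    { blackValue :: a
    , blackLeft  :: Maybe (Either (BlackNode a) (RedNode a))
    , blackRight :: Maybe (Either (BlackNode a) (RedNode a))
    }

data RedNode a = RedNode
    { redValue :: a
    , redLeft  :: Maybe (BlackNode a)
    , redRight :: Maybe (BlackNode a)
    }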

As @dfeuer comments above, GHC 8.0 will likely relax this constraint.

Does anyone know a good book on data structures where trees are explained in depth? Language: C# or Java.

Thanks, G

There are lots of books out there on data structures, but if you really want to understand tree structures I recommend you check out Purely Functional Data Structures.

I know you asked for C#/Java, but tree structures are much more elegantly explained in an FP language.