Lock-free data structures: the free list

Notes: Although this article's information about the .NET memory manager (namely, that it may block on an allocation) is broadly correct, in the simplest allocation scenario object allocations are in fact lock-free. More importantly, the implementation of the free list here is broken: the nodes used by the free list will tend to suffer from the ABA problem. DO NOT USE THIS FREE LIST CODE. The code you can download does not use free lists at all.

------

In my previous article in this series on lock-free data structures, I'd implemented a lock-free queue. In doing so, I showed that despite the simplicity of the underlying data structure, making it perform in a lock-free manner in a multithreaded application was no mean feat.

In my closing remarks, though, having breezily talked through the design and implementation of the non-blocking queue, I briefly noted a problem with the dequeue operation (and the same problem afflicts the pop operation in the stack): there was no guarantee that a node had been finished with once the underlying linked list had been updated.

Let's look at the stack popping code again (it's simpler to understand the issue).

public T Pop() {
  SingleLinkNode<T> node;
  do {
    node = head.Next;
    if (node == null)
      return default(T);
  } while (!SyncMethods.CAS<SingleLinkNode<T>>(ref head.Next, node, node.Next));
  return node.Item;
}
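For reference, this code leans on two pieces from the earlier articles in the series: the SingleLinkNode&lt;T&gt; class and the SyncMethods.CAS wrapper. Here's a minimal sketch of both, assuming (as described last time) that CAS is just a typed wrapper over Interlocked.CompareExchange:

```csharp
using System;
using System.Threading;

// The node type from the earlier articles: an item plus a reference
// to the next node in the linked list.
class SingleLinkNode<T> {
  public SingleLinkNode<T> Next;
  public T Item;
}

static class SyncMethods {
  // Atomically replace location with newValue, but only if location
  // still equals comparand; returns true if the swap happened.
  public static bool CAS<T>(ref T location, T comparand, T newValue)
      where T : class {
    return ReferenceEquals(
      Interlocked.CompareExchange<T>(ref location, newValue, comparand),
      comparand);
  }
}
```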

In the loop, we read the head node's Next value into a local variable. We then try to update the head node's Next reference with the node's Next reference, thereby advancing the pointer to the first node in the linked list. If the CAS succeeds, we return the node's Item value (and, in a non-garbage-collected environment, we'd dispose of the node itself).

Imagine the following scenario: after we read the head.Next value into our local variable, this thread gets swapped out for some appreciable amount of time. (I'm not talking hours here, but, say, several milliseconds, which is an eternity when looked at from the viewpoint of our CPUs.)

During this swapped-out time, another thread may pop the node and return the item. In a garbage-collected environment like .NET we don't particularly care: the node we're holding onto won't get garbage collected since we have a reference to it. In a non-garbage-collected environment though, oh boy, the node we're holding onto may have been freed already. And therein lies a big problem for those sorts of environments: the very next line we execute (the CAS statement), we'll be dereferencing something that may no longer exist, with the attendant kablooey not so far behind.
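To make the danger concrete, here's that interleaving played out step by step in a single thread. This is only a sketch: the Node class and the hand-rolled CAS via Interlocked.CompareExchange stand in for SingleLinkNode&lt;T&gt; and SyncMethods.CAS, and the "threads" are just comments. It shows the classic ABA problem the opening note warns about: thread 2 pops A, pops B, then pushes A back, so thread 1's stale CAS still sees A on top and succeeds.

```csharp
using System;
using System.Threading;

class Node {
  public Node Next;
  public string Item;
}

// Build the stack head -> A -> B -> C (head is a dummy node).
var c = new Node { Item = "C" };
var b = new Node { Item = "B", Next = c };
var a = new Node { Item = "A", Next = b };
var head = new Node { Next = a };

// Thread 1 starts a pop: it reads the top node and its successor...
Node t1Node = head.Next;   // A
Node t1Next = t1Node.Next; // B
// ...and is then swapped out before its CAS runs.

// Meanwhile, thread 2 pops A, pops B (it now owns both), and then
// pushes A back. In a native environment this is "frees A, and the
// allocator hands the same memory out again".
head.Next = b;             // pop A
head.Next = c;             // pop B
a.Next = c;                // push A back:
head.Next = a;             //   head -> A -> C

// Thread 1 wakes up. Its CAS compares head.Next against A, and the
// top of the stack *is* A again, so the CAS succeeds...
bool swapped = ReferenceEquals(
  Interlocked.CompareExchange(ref head.Next, t1Next, t1Node), t1Node);

// ...leaving head -> B, a node that thread 2 popped and now owns.
Console.WriteLine(swapped);                        // True
Console.WriteLine(ReferenceEquals(head.Next, b));  // True
```

In .NET the damage is limited to the data structure's own invariants; in a native environment, dereferencing and CASing recycled memory like this is how things go kablooey.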

This is a real problem with lock-free data structures in non-garbage-collected environments, one that we in C# (or Java) don't have to worry about. We cannot free the nodes that we use in our linked list because we cannot guarantee that every thread will have finished with them.

Again, I repeat, with .NET we don't have to worry about such things. By holding a reference to a node, we can guarantee that the node won't be garbage collected. We can't guarantee that the values of the fields won't change (that is, Next and Item), just that when we access them we won't cause an error.

Nevertheless, as I mentioned last time, we are still implementing a lock-free data structure (the stack and the queue) using a locking memory manager, and that's an issue, be the environment .NET or not.

The solution, which as it happens also solves the problem in non-garbage-collected environments, is to use a free list to hold the "freed" nodes (or, rather, the nodes that are no longer needed). Once we have a free list, we can allocate new nodes from this list instead of from the memory manager. And of course the free list must be lock-free.

The interesting thing is that a free list is most easily implemented as a stack, we've already explored how to write a lock-free stack, and the items we'll be storing in this stack are already nodes. So the lock-free free list is pretty easy to write. Here's the code.

  internal class FreeList<T> {

    // Dummy node; head.Next is the top of the free list.
    private SingleLinkNode<T> head;

    public FreeList() {
      head = new SingleLinkNode<T>();
    }

    public void FreeNode(SingleLinkNode<T> node) {
      // Clear the item so we don't keep a stray reference alive.
      node.Item = default(T);
      // Push the node: link it in front of the current top, then CAS it
      // into place (the comparand, node.Next, is the copy of head.Next
      // we just took).
      do {
        node.Next = head.Next;
      } while (!SyncMethods.CAS<SingleLinkNode<T>>(ref head.Next, node.Next, node));
    }

    public SingleLinkNode<T> GetNode() {
      SingleLinkNode<T> node;
      do {
        node = head.Next;
        // The free list is empty: fall back on the memory manager.
        if (node == null)
          return new SingleLinkNode<T>();
      } while (!SyncMethods.CAS<SingleLinkNode<T>>(ref head.Next, node, node.Next));
      node.Next = null;
      return node;
    }
  }

A couple of interesting things to note. First, when FreeNode() is called, we clear the node's Item field; this helps the garbage collector out if T happens to be a reference type, since we're removing the reference. Second, if the free list is empty when GetNode() is called, we just allocate a new node from the memory manager. So we go to the (potentially locking) memory manager infrequently, especially once the data structure using the free list has been running for a while and the free list has become populated.
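Both points can be seen in a quick single-threaded sketch. (So that the snippet stands alone, the FreeList&lt;T&gt; from above is repeated, along with minimal versions of SingleLinkNode&lt;T&gt; and SyncMethods.CAS assumed from the earlier articles.)

```csharp
using System;
using System.Threading;

// Minimal versions of the pieces from the earlier articles.
class SingleLinkNode<T> { public SingleLinkNode<T> Next; public T Item; }
static class SyncMethods {
  public static bool CAS<T>(ref T location, T comparand, T newValue) where T : class {
    return ReferenceEquals(
      Interlocked.CompareExchange<T>(ref location, newValue, comparand),
      comparand);
  }
}

// The FreeList<T> from above, unchanged.
class FreeList<T> {
  private SingleLinkNode<T> head = new SingleLinkNode<T>();
  public void FreeNode(SingleLinkNode<T> node) {
    node.Item = default(T);
    do { node.Next = head.Next; }
    while (!SyncMethods.CAS<SingleLinkNode<T>>(ref head.Next, node.Next, node));
  }
  public SingleLinkNode<T> GetNode() {
    SingleLinkNode<T> node;
    do {
      node = head.Next;
      if (node == null) return new SingleLinkNode<T>();
    } while (!SyncMethods.CAS<SingleLinkNode<T>>(ref head.Next, node, node.Next));
    node.Next = null;
    return node;
  }
}

var freeList = new FreeList<string>();
var first = freeList.GetNode();     // list is empty: freshly allocated
first.Item = "hello";
freeList.FreeNode(first);           // Item is cleared, node goes on the list
var second = freeList.GetNode();    // the same node comes straight back

Console.WriteLine(ReferenceEquals(first, second)); // True
Console.WriteLine(second.Item == null);            // True
```

The recycling is exactly what makes this code dangerous in the multithreaded case, of course: as the opening note says, a recycled node is what opens the door to the ABA problem.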

Now that we have a free list, we can update our original stack and queue to use one.

  public class LockFreeStack<T> {
    ...
    private FreeList<T> freeList;

    public LockFreeStack() {
      ...
      freeList = new FreeList<T>();
    }

    public void Push(T item) {
      SingleLinkNode<T> node = freeList.GetNode();
      ...
    }

    public T Pop() {
      ...      
      T result = node.Item;
      freeList.FreeNode(node);
      return result;
    }
  }


  public class LockFreeQueue<T> {
    ...
    FreeList<T> freeList;

    public LockFreeQueue() {
      ...
      freeList = new FreeList<T>();
    }

    public void Enqueue(T item) {
      ...
      // create and initialize the new node
      SingleLinkNode<T> node = freeList.GetNode();
      node.Item = item;
      ...
    }

    public T Dequeue() {
      T result = default(T);
      SingleLinkNode<T> oldHead = null;
      ... loop code ...
      freeList.FreeNode(oldHead);
      return result;
    }
  }

Next time we'll wrap up with a lock-free priority queue (which is where we started) and I'll have some closing comments.

Here's a bunch of links to tie all this lock-free stuff together.