
Functional heaps


Note that if a leftist tree also observes the heap invariant, that is, a node's value is never greater than the values of its children, we get a leftist heap.

The preceding tree, for example, is a leftist heap. A notable property of a leftist heap is that any path from the root to a leaf is a sorted list. This helps us merge two heaps efficiently after a pop operation.

Where does merge come in? We know the minimum element is at the root. So, when we pop (remove) the root element, we are left with two leftist trees. Merging these two restores the invariants and gives us the next version of the persistent heap.
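To make this concrete, here is a minimal, stand-alone sketch of a leftist heap merge. The names LeftistHeap, Leaf, Node, makeNode, merge, and deleteMin are illustrative assumptions, not the book's implementation (the book's own code, built on TreeNode, follows below):

sealed trait LeftistHeap[+A]
case object Leaf extends LeftistHeap[Nothing]
case class Node[A](rank: Int, value: A,
                   left: LeftistHeap[A],
                   right: LeftistHeap[A]) extends LeftistHeap[A]

// Rank is the length of the right spine; an empty heap has rank 0.
def rank[A](h: LeftistHeap[A]): Int = h match {
  case Leaf             => 0
  case Node(r, _, _, _) => r
}

// Place the higher-rank subtree on the left to preserve the leftist property.
def makeNode[A](v: A, a: LeftistHeap[A], b: LeftistHeap[A]): LeftistHeap[A] =
  if (rank(a) >= rank(b)) Node(rank(b) + 1, v, a, b)
  else Node(rank(a) + 1, v, b, a)

// Merge walks down the right spines, always keeping the smaller root on top.
def merge[A](h1: LeftistHeap[A], h2: LeftistHeap[A])
            (implicit ord: Ordering[A]): LeftistHeap[A] = (h1, h2) match {
  case (Leaf, h) => h
  case (h, Leaf) => h
  case (Node(_, x, l, r), Node(_, y, _, _)) =>
    if (ord.lteq(x, y)) makeNode(x, l, merge(r, h2))
    else merge(h2, h1) // swap so that the smaller root ends up on top
}

// Popping the minimum: drop the root and merge the two leftist subtrees.
def deleteMin[A](h: LeftistHeap[A])(implicit ord: Ordering[A]): LeftistHeap[A] =
  h match {
    case Leaf             => sys.error("deleteMin on an empty heap")
    case Node(_, _, l, r) => merge(l, r)
  }

Because only the right spines are traversed, and the right spine of a leftist tree is at most logarithmic in its size, merge (and therefore deleteMin) runs in O(log n) time while sharing the untouched left subtrees with the previous version of the heap.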

Inserting a new node can also be expressed as a merge. The new node can be viewed as a singleton tree (a tree with just one node), which is then merged with the existing heap. Here comes the code:

sealed abstract class TreeNode { 
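  // rank: typically the length of the right spine of this subtree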
  def rank: Int 
} 

The sealed keyword makes sure we know all the subclasses, as these must be defined in the same source file. The compiler can then warn us if a pattern match does not cover all the cases.
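Continuing the stand-alone sketch from earlier (again, insert and findMin are illustrative names, not necessarily the book's API), insertion is just a merge with a singleton heap:

// A brand-new node has an empty right spine below it, so its rank is 1.
def insert[A](v: A, h: LeftistHeap[A])(implicit ord: Ordering[A]): LeftistHeap[A] =
  merge(Node(1, v, Leaf, Leaf), h)

// The heap invariant puts the minimum at the root.
def findMin[A](h: LeftistHeap[A]): A = h match {
  case Leaf             => sys.error("findMin on an empty heap")
  case Node(_, v, _, _) => v
}

// Usage:
// val h = List(5, 1, 9, 3).foldRight(Leaf: LeftistHeap[Int])((v, acc) => insert(v, acc))
// findMin(h)            // 1
// findMin(deleteMin(h)) // 3

Each operation returns a new heap and leaves the old one intact, which is exactly the persistence we are after.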