Learning Functional Data Structures and Algorithms

By: Raju Kumar Mishra

Overview of this book

Functional data structures have the power to improve the codebase of an application and improve efficiency. With the advent of functional programming, and with powerful functional languages such as Scala, Clojure, and Elixir becoming part of important enterprise applications, functional data structures have gained an important place in the developer toolkit. Immutability is a cornerstone of functional programming. Immutable and persistent data structures are thread-safe by definition and hence very appealing for writing robust concurrent programs. How do we express traditional algorithms in a functional setting? Won't we end up copying too much? Do we trade performance for versioned data structures? This book attempts to answer these questions by looking at functional implementations of traditional algorithms. It begins with a refresher on, and consolidation of, what functional programming is all about. Next, you'll get to know about Lists, the workhorse data type for most functional languages. We show what structural sharing means and how it helps to make immutable data structures efficient and practical. Scala is the primary implementation language for most of the examples; at times, we also present Clojure snippets to illustrate the underlying fundamental theme. While writing code, we use ADTs (abstract data types). Stacks, Queues, Trees, and Graphs are all familiar ADTs. You will see how these ADTs are implemented in a functional setting. We look at implementation techniques such as amortization and lazy evaluation to ensure efficiency. By the end of the book, you will be able to write efficient functional data structures and algorithms for your applications.

Amortization


To better understand the concept of amortization, let's look at a dynamic array: an array that grows whenever there is no room left to add a new element. We could grow it as follows:

  1. Allocate a new array double the size of the current array.

  2. Copy all the elements from the current array to the new array.

  3. Make the new array the current array.

Here is a sample run of the algorithm: starting from capacity 1, the array fills up and is reallocated at capacities 1, 2, 4, 8, and so on, with every existing element copied into the new, doubled array each time.
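
To make the three steps concrete, here is a minimal Scala sketch of such a growable array. The name DynArray and its API are illustrative only, not the book's code, and the implementation is deliberately imperative so that the occasional reallocation is easy to see:

import scala.reflect.ClassTag

// A minimal sketch of a dynamic array with capacity doubling.
// DynArray is a hypothetical class used for illustration.
class DynArray[A: ClassTag](initialCapacity: Int = 1) {
  require(initialCapacity > 0, "capacity must be positive")

  private var elems = new Array[A](initialCapacity)
  private var count = 0

  def size: Int = count

  def apply(i: Int): A = {
    require(i >= 0 && i < count, s"index $i out of bounds")
    elems(i)
  }

  def append(a: A): Unit = {
    if (count == elems.length) grow() // full: the occasional O(n) step
    elems(count) = a                  // the common O(1) step
    count += 1
  }

  private def grow(): Unit = {
    val bigger = new Array[A](elems.length * 2) // 1. allocate double the size
    Array.copy(elems, 0, bigger, 0, count)      // 2. copy every element over
    elems = bigger                              // 3. adopt the new array
  }
}

Appending ten elements to a DynArray[Int] of initial capacity 1, for example, triggers reallocations when the size hits 1, 2, 4, and 8; the other six appends are plain O(1) writes.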

This allocation and copying obviously incurs an O(n) cost once in a while. As long as most insertions incur only an O(1) cost, though, we should be fine.

If you continue to trace the growth of this array, you will soon realize that the allocate-and-copy operations occur less and less frequently as the number of elements grows.

Most of the insert operations have O(1) complexity. In other words, an insertion completes in amortized constant time.
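
To see why, here is a back-of-the-envelope calculation (not from the book) that charges one unit of work per element written or copied, assuming the array starts at capacity 1:

$$
T(n) \;=\; \underbrace{n}_{\text{writes}} \;+\; \underbrace{\sum_{k=0}^{\lfloor \log_2 (n-1) \rfloor} 2^k}_{\text{copies at each doubling}} \;<\; n + 2n \;=\; 3n
$$

The n insertions therefore cost fewer than 3n units in total, that is, fewer than three units each on average, which is precisely the amortized O(1) bound.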