Learning Concurrent Programming in Scala - Second Edition

By: Aleksandar Prokopec

Overview of this book

Scala is a modern, multiparadigm programming language designed to express common programming patterns in a concise, elegant, and type-safe way. It smoothly integrates the features of object-oriented and functional languages. This second edition provides updated coverage of the Scala 2.12 platform, which targets Java 8 and requires it for execution. The book starts by introducing the foundations of concurrent programming on the JVM and the basics of the Java Memory Model, and then presents some of the classic building blocks of concurrency, such as atomic variables, thread pools, and concurrent data structures, along with the caveats of traditional concurrency. It then walks you through different high-level concurrency abstractions, each tailored toward a specific class of programming tasks, while touching on the latest advancements in Scala's asynchronous programming capabilities. It also covers useful patterns and idioms for the techniques described. Finally, the book presents an overview of when to use which concurrency library, demonstrates how they all work together, and introduces new, exciting approaches to building concurrent and distributed systems.

Who this book is written for

If you are a Scala programmer with no prior knowledge of concurrent programming, or you are seeking to broaden your existing knowledge of concurrency, this book is for you. Basic knowledge of the Scala programming language will be helpful.

The need for reactors

As you may have concluded by reading this book, writing concurrent and distributed programs is not easy. Ensuring correctness, scalability, and fault-tolerance is harder than in a sequential program. Here, we recall some of the reasons for this:

  • First of all, most concurrent and distributed computations are, by their nature, non-deterministic. This non-determinism is not a consequence of poor programming abstractions, but is inherent in systems that need to react to external events.

  • Data races are a basic characteristic of most shared-memory multicore systems. Combined with inherent non-determinism, they lead to subtle bugs that are hard to detect and reproduce.

  • When it comes to distributed computing, things get even more complicated: random faults, network outages, and interruptions compromise the correctness and robustness of distributed systems.

  • Furthermore, shared-memory programs do not work in distributed environments, and existing...
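The data-race hazard mentioned above is easy to trigger. The following is a minimal sketch (not from the book) in which two threads increment a shared counter without synchronization; because `counter += 1` is a non-atomic read-modify-write, increments can be lost, and the final value typically differs from run to run:

```scala
// A minimal data-race sketch: two threads race on an unsynchronized counter.
object DataRaceDemo extends App {
  var counter = 0 // shared mutable state, no synchronization

  // Starts a thread that performs 100000 unsynchronized increments.
  def incrementMany(): Thread = {
    val t = new Thread(() => {
      var i = 0
      while (i < 100000) {
        counter += 1 // read-modify-write; not atomic
        i += 1
      }
    })
    t.start()
    t
  }

  val t1 = incrementMany()
  val t2 = incrementMany()
  t1.join(); t2.join()

  // Often prints a value below 200000, and a different one on each run.
  println(s"counter = $counter")
}
```

The non-determinism is exactly the point: the program has no single correct trace, so the bug may vanish when you attach a debugger or add logging, which is what makes such defects hard to reproduce.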