
JavaScript Concurrency

By: Adam Boduch

Overview of this book

Concurrent programming may sound abstract and complex, but it helps to deliver a better user experience. With single-threaded JavaScript, applications lack dynamism: while JavaScript code is running, nothing else can happen. The DOM can't update, which means the UI freezes. In a world where users expect speed and responsiveness – in all senses of the word – this is something no developer can afford. Fortunately, JavaScript has evolved to adopt concurrent capabilities – one of the reasons why it is still at the forefront of modern web development. This book helps you dive into concurrent JavaScript and demonstrates how to apply its core principles, key techniques, and tools to a range of complex development challenges. Built around the three core principles of concurrency – parallelism, synchronization, and conservation – you'll learn everything you need to unlock a more efficient and dynamic JavaScript, and to lay the foundations of even better user experiences. Throughout the book, you'll learn how to put these principles into action using a range of development approaches. Covering everything from JavaScript promises and web workers to generators and functional programming techniques, everything you learn will have a real impact on the performance of your applications. You'll also learn how to move between client and server, for a more frictionless and fully realized approach to development. With further guidance on concurrent programming with Node.js, JavaScript Concurrency is committed to making you a better web developer. The best developers know that great design is about more than the UI – with concurrency, you can be confident that every project you build is designed for dynamism and power.

Types of concurrency

JavaScript is a run-to-completion language. There's no getting around it, despite any concurrency mechanisms that are thrown on top of it. In other words, our JavaScript code isn't going to yield control to another thread in the middle of an if statement. This matters because it tells us which level of abstraction makes sense for thinking about JavaScript concurrency. Let's look at the two types of concurrent actions found in our JavaScript code.
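Run-to-completion is easy to see in action. Here's a minimal sketch: a timer callback that is due to run cannot interrupt a long synchronous loop, no matter how long the loop takes.

```javascript
var events = [];

// Ask for this callback to run as soon as possible.
setTimeout(() => {
    events.push('timer callback');
}, 0);

// A long synchronous computation. Even though the timer is due,
// nothing can interrupt this loop; the callback has to wait until
// the current code runs to completion.
var sum = 0;
for (var i = 0; i < 10000000; i++) {
    sum += i;
}

events.push('loop finished');
console.log(events); // ['loop finished'] - the callback hasn't run yet
```

Only after the loop finishes, and control returns to the event loop, does 'timer callback' get pushed.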

Asynchronous actions

A defining characteristic of asynchronous actions is that they do not block other actions that follow. Asynchronous actions don't necessarily mean fire-and-forget. Rather, when the part of the action we're waiting on completes, we run a callback function. This callback function falls out of sync with the rest of our code; hence, the term asynchronous.

In web front-ends, this generally means fetching data from a remote service. These fetching actions are relatively slow because they have to traverse the network connection. It makes sense for these actions to be asynchronous: just because our code is waiting on some data so that it can fire a callback function doesn't mean the user should have to sit around and wait too. Furthermore, it's unlikely that any screen the user is currently looking at depends on only one remote resource, so serially processing multiple remote fetch requests would have a detrimental effect on the user experience.

Here's a general idea of what asynchronous code looks like:

// Start the request; fetch() returns immediately.
var request = fetch('/foo');

// Run this callback once the response has arrived.
request.then((response) => {
    // Do something with "response" now that it has arrived.
});

// Don't wait for the response; update the DOM immediately.


We're not limited to fetching remote data, as the single source of asynchronous actions. When we make network requests, these asynchronous control flows actually leave the browser. But what about asynchronous actions that are confined within the browser? Take the setTimeout() function as an example. It follows the same callback pattern that's used with network fetch requests. The function is passed a callback, which is executed at a later point. However, nothing ever leaves the browser. Instead, the action is queued behind any number of other actions. This is because asynchronous actions are still just one thread of control, executed by one CPU. This means that as our applications grow in size and complexity, we're faced with a concurrency scaling issue. But then, maybe asynchronous actions weren't meant to solve the single CPU problem.
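This queuing behavior is easy to observe. Two callbacks scheduled with the same delay don't run at the same instant; the single thread of control works through the queue in the order the callbacks were scheduled:

```javascript
var order = [];

// Two callbacks scheduled with the same delay: both are queued,
// and the single thread runs them in scheduling order.
setTimeout(() => order.push('first'), 0);
setTimeout(() => order.push('second'), 0);

// Check the queue after both callbacks have had a chance to run.
setTimeout(() => {
    console.log(order); // ['first', 'second']
}, 10);
```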

Perhaps a better way to think about asynchronous actions performed on a single CPU is to picture a juggler. The juggler's brain is the CPU, coordinating his motor actions. The balls that get tossed around are the data our actions operate on. There are only two fundamental actions we care about—toss and catch:

Since the juggler has only one brain, he can't possibly devote his mental capacity to more than one task at a time. However, the juggler is experienced and knows that a toss or a catch needs only a tiny fraction of his attention. Once a ball is in the air, he's free to return his attention to catching the ball that's about to land.

To anyone observing this juggler in action, it appears as though he's paying full attention to all six balls, when in reality, he's ignoring five of them at any point in time.

Parallel actions

Like asynchronicity, parallelism allows control flow to continue without waiting on actions to complete. Unlike asynchronicity, parallelism depends on hardware. This is because we can't have two or more flows of control taking place in parallel on a single CPU. However, the main aspect that sets parallelism apart from asynchronicity is the rationale for using it. The two approaches to concurrency solve different problems, and both require different design principles.

At the end of the day, we want to perform actions in parallel that would be time prohibitive if performed synchronously. Think about a user waiting for three expensive actions to complete. If each takes 10 seconds (an eternity on a UX timescale), the user will have to wait for 30 seconds. If we're able to perform these tasks in parallel, we can bring the aggregate wait time closer to 10 seconds. We get more for less, leading to a more responsive user interface.
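As a quick sanity check on that arithmetic (the 10-second durations are hypothetical):

```javascript
// Hypothetical durations, in seconds, of three expensive actions.
var durations = [10, 10, 10];

// Performed one after the other, the waits add up.
var serialWait = durations.reduce((total, d) => total + d, 0);
console.log(serialWait); // 30

// Performed in parallel, one worker per action, the wait is only
// as long as the slowest action.
var parallelWait = Math.max.apply(null, durations);
console.log(parallelWait); // 10
```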

None of this is free. Like asynchronous actions, parallel actions lead to callbacks as a communication mechanism. In general, designing for parallelism is hard, because in addition to communicating with worker threads, we have to worry about the task at hand, that is, what are we hoping to achieve by using worker threads? And how do we break down our problem into smaller actions? The following is a rough idea of what our code starts to look like when we introduce parallelism:

// Spawn a worker; the code in "worker.js" runs in parallel.
var worker = new Worker('worker.js');
var myElement = document.getElementById('myElement');

// Update the UI when the worker posts its result back.
worker.addEventListener('message', (e) => {
    myElement.textContent = 'Done working!';
});

// Hand the expensive work off to the worker thread.
myElement.addEventListener('click', (e) => {
    worker.postMessage('start');
});
Don't worry about the mechanics of what's happening with this code, as they'll all be covered in depth later on. The takeaway is that as we throw workers into the mix, we add more callbacks to an environment that's already polluted with them. This is why we have to design for parallelism in our code, which is a major focus of this book, starting in Chapter 5, Working with Workers.

Let's think about the juggler analogy from the preceding section. The toss and catch actions are performed asynchronously by the juggler; that is, he has only one brain/CPU. But suppose the environment around us is constantly changing. There's a growing audience for our juggling act and a single juggler can't possibly keep them all entertained:

The solution is to introduce more jugglers to the act. This way, we add more computing power, capable of performing multiple toss and catch actions in the same instant. This simply isn't possible with a single juggler running asynchronously.

We're not out of the woods yet, because we can't just have the newly-added jugglers stand in one place, and perform their act the same way our single juggler did. The audience is larger, more diverse, and needs to be entertained. The jugglers need to be able to handle different items. They need to move around on the floor so that the various sections of the audience are kept happy. They might even start juggling with each other. It's up to us to produce a design that's capable of orchestrating this juggling act.