Mastering Concurrency Programming with Java 9, Second Edition

By: Javier Fernández González

Overview of this book

Concurrency programming allows several large tasks to be divided into smaller sub-tasks, which are further processed as individual tasks that run in parallel. Java 9 includes a comprehensive API with lots of ready-to-use components for easily implementing powerful concurrency applications, but with high flexibility so you can adapt these components to your needs. The book starts with a full description of the design principles of concurrent applications and explains how to parallelize a sequential algorithm. You will then be introduced to Threads and Runnables, which are an integral part of Java 9's concurrency API. You will see how to use all the components of the Java concurrency API, from the basics to the most advanced techniques, and will implement them in powerful real-world concurrency applications. The book ends with a detailed description of the tools and techniques you can use to test a concurrent Java application, along with a brief insight into other concurrency mechanisms in the JVM.

Concurrency design patterns

In software engineering, a design pattern is a solution to a common problem. This solution has been used many times and has proved to be optimal. You can use design patterns to avoid 'reinventing the wheel' every time you have to solve one of these problems. Singleton or Factory are examples of common design patterns used in almost every application.

Concurrency also has its own design patterns. In this section, we describe some of the most useful concurrency design patterns and their implementation in the Java language.

Signaling

This design pattern explains how to implement the situation where one task has to notify another task of an event. The easiest way to implement this pattern is with a semaphore or a mutex, using the ReentrantLock or Semaphore classes of the Java language, or even the wait() and notify() methods included in the Object class (which must be called while holding the object's monitor, inside a synchronized block).

See the following example:

public void task1() { 
  section1(); 
  synchronized (commonObject) { 
    commonObject.notify(); 
  } 
} 
 
public void task2() throws InterruptedException { 
  synchronized (commonObject) { 
    commonObject.wait(); 
  } 
  section2(); 
} 

Under these circumstances, the section2() method will always be executed after the section1() method. Note, however, that if task1 calls notify() before task2 is waiting, the notification is lost and task2 will block forever; a Semaphore avoids this problem because a released permit is stored, not lost.
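The same signaling can be sketched with a Semaphore initialized to zero permits; a permit released before acquire() is simply stored, so no notification is lost. In this minimal runnable sketch, the class and field names are illustrative and appends to a trace stand in for section1() and section2():

```java
import java.util.concurrent.Semaphore;

// Minimal sketch of the Signaling pattern with a Semaphore.
public class SignalingDemo {

    private static final Semaphore signal = new Semaphore(0); // no permits initially
    private static final StringBuilder trace = new StringBuilder();

    static void task1() {
        synchronized (trace) { trace.append("section1;"); } // stands in for section1()
        signal.release();                                   // notify the event
    }

    static void task2() throws InterruptedException {
        signal.acquire();                                   // wait for the event
        synchronized (trace) { trace.append("section2;"); } // stands in for section2()
    }

    public static String run() throws InterruptedException {
        trace.setLength(0);
        Thread t2 = new Thread(() -> {
            try { task2(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t2.start();
        task1();
        t2.join();
        return trace.toString(); // always "section1;section2;"
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Because task2() can only append its trace after acquiring the permit released at the end of task1(), the order of the two sections is guaranteed regardless of thread scheduling.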

Rendezvous

This design pattern is a generalization of the Signaling pattern. In this case, the first task waits for an event from the second task, and the second task waits for an event from the first. The solution is similar to that of Signaling, but in this case, you must use two synchronization objects instead of one.

See the following example:

public void task1() throws InterruptedException { 
  section1_1(); 
  synchronized (commonObject1) { 
    commonObject1.notify(); 
  } 
  synchronized (commonObject2) { 
    commonObject2.wait(); 
  } 
  section1_2(); 
} 
 
public void task2() throws InterruptedException { 
  section2_1(); 
  synchronized (commonObject2) { 
    commonObject2.notify(); 
  } 
  synchronized (commonObject1) { 
    commonObject1.wait(); 
  } 
  section2_2(); 
} 

Under these circumstances, section2_2() will always be executed after section1_1(), and section1_2() after section2_1(). Take into account that if you put the call to the wait() method before the call to the notify() method, you will have a deadlock. Even in the order shown, a notification sent before the other task is waiting is lost, so semaphores are a safer choice for a rendezvous.
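A deadlock-free rendezvous can be sketched with two semaphores initialized to zero permits. In this illustrative sketch, appends to a trace stand in for the four section methods:

```java
import java.util.concurrent.Semaphore;

// Minimal sketch of the Rendezvous pattern with two semaphores.
public class RendezvousDemo {
    private static final Semaphore sem1 = new Semaphore(0);
    private static final Semaphore sem2 = new Semaphore(0);
    private static final StringBuffer trace = new StringBuffer(); // thread-safe appends

    static void task1() throws InterruptedException {
        trace.append("1_1;");  // stands in for section1_1()
        sem1.release();        // signal task2: section1_1() is done
        sem2.acquire();        // wait until section2_1() is done
        trace.append("1_2;");  // stands in for section1_2()
    }

    static void task2() throws InterruptedException {
        trace.append("2_1;");  // stands in for section2_1()
        sem2.release();
        sem1.acquire();
        trace.append("2_2;");  // stands in for section2_2()
    }

    public static boolean run() throws InterruptedException {
        trace.setLength(0);
        Thread t = new Thread(() -> {
            try { task2(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t.start();
        task1();
        t.join();
        String s = trace.toString();
        // 2_2 always after 1_1, and 1_2 always after 2_1
        return s.indexOf("1_1") < s.indexOf("2_2") && s.indexOf("2_1") < s.indexOf("1_2");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Releasing before acquiring is what makes this safe: a permit released early is kept by the semaphore, so neither task can miss the other's signal.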

Mutex

A mutex is a mechanism that you can use to implement a critical section, ensuring mutual exclusion. That is to say, only one task can execute the portion of code protected by the mutex at once. In Java, you can implement a critical section using the synchronized keyword (which allows you to protect a portion of code or a full method), the ReentrantLock class, or the Semaphore class.

Look at the following example:

public void task() { 
  preCriticalSection(); 
  lockObject.lock(); // The critical section begins 
  try { 
    criticalSection(); 
  } finally { 
    lockObject.unlock(); // The critical section ends 
  } 
  postCriticalSection(); 
} 
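As a runnable sketch of the pattern (the class, field, and counter are illustrative), two threads increment a shared counter inside a critical section protected by a ReentrantLock; without the lock, counter++ would be a data race and the final value would be unpredictable:

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the Mutex pattern with ReentrantLock.
public class MutexDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    static void task(int increments) {
        for (int i = 0; i < increments; i++) {
            lock.lock();       // the critical section begins
            try {
                counter++;     // read-modify-write: not atomic without the lock
            } finally {
                lock.unlock(); // the critical section ends
            }
        }
    }

    public static int run() throws InterruptedException {
        counter = 0;
        Thread t1 = new Thread(() -> task(10_000));
        Thread t2 = new Thread(() -> task(10_000));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter; // always 20000 thanks to mutual exclusion
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```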

Multiplex

The Multiplex design pattern is a generalization of the Mutex. In this case, a determined number of tasks can execute the critical section at once. It is useful, for example, when you have multiple copies of a resource. The easiest way to implement this design pattern in Java is to use the Semaphore class, initialized with the number of tasks that can execute the critical section at once.

Look at the following example:

public void task() throws InterruptedException { 
  preCriticalSection(); 
  semaphoreObject.acquire(); 
  try { 
    criticalSection(); 
  } finally { 
    semaphoreObject.release(); 
  } 
  postCriticalSection(); 
} 
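A runnable sketch (all names illustrative): ten tasks contend for three permits, and atomic counters record how many tasks were inside the critical section at the same time. The observed peak never exceeds the semaphore's permits:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the Multiplex pattern: at most PERMITS tasks at once.
public class MultiplexDemo {
    private static final int PERMITS = 3;
    private static final Semaphore semaphore = new Semaphore(PERMITS);
    private static final AtomicInteger inside = new AtomicInteger();
    private static final AtomicInteger maxInside = new AtomicInteger();

    static void task() throws InterruptedException {
        semaphore.acquire();
        try {
            int now = inside.incrementAndGet();
            maxInside.accumulateAndGet(now, Math::max); // record peak concurrency
            Thread.sleep(10);                           // simulated work
            inside.decrementAndGet();
        } finally {
            semaphore.release();
        }
    }

    public static int run() throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try { task(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return maxInside.get(); // never exceeds PERMITS
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```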

Barrier

This design pattern explains how to implement the situation where you need to synchronize some tasks at a common point. None of the tasks can continue with its execution until all the tasks have arrived at the synchronization point. The Java Concurrency API provides the CyclicBarrier class, which is an implementation of this design pattern.

Look at the following example:

public void task() throws InterruptedException, BrokenBarrierException { 
  preSyncPoint(); 
  barrierObject.await(); 
  postSyncPoint(); 
} 
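A runnable sketch (names illustrative): four tasks meet at a CyclicBarrier, whose optional barrier action runs once, when the last task arrives:

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the Barrier pattern with CyclicBarrier.
public class BarrierDemo {
    private static final int PARTIES = 4;
    private static final AtomicInteger arrived = new AtomicInteger();
    // The barrier action runs once, when the last of the PARTIES tasks arrives.
    private static final CyclicBarrier barrier = new CyclicBarrier(PARTIES,
        () -> System.out.println(arrived.get() + " tasks reached the barrier"));

    static void task() throws Exception {
        arrived.incrementAndGet(); // stands in for preSyncPoint()
        barrier.await();           // block until all PARTIES tasks arrive
        // postSyncPoint() would go here: every task has passed the sync point
    }

    public static int run() throws InterruptedException {
        Thread[] threads = new Thread[PARTIES];
        for (int i = 0; i < PARTIES; i++) {
            threads[i] = new Thread(() -> {
                try { task(); } catch (Exception e) { Thread.currentThread().interrupt(); }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return arrived.get(); // all PARTIES tasks arrived before anyone continued
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```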

Double-checked locking

This design pattern provides a solution to the problem that occurs when you acquire a lock and then check a condition. If the condition is false, you have paid the overhead of acquiring the lock needlessly. An example of this situation is the lazy initialization of objects. If you have a class implementing the Singleton design pattern, you may have some code like this:

import java.util.concurrent.locks.Lock; 
import java.util.concurrent.locks.ReentrantLock; 
 
public class Singleton { 
  private static Singleton reference; 
  private static final Lock lock = new ReentrantLock(); 
 
  public static Singleton getReference() { 
    lock.lock(); 
    try { 
      if (reference == null) { 
        reference = new Singleton(); 
      } 
    } finally { 
      lock.unlock(); 
    } 
    return reference; 
  } 
} 

A possible solution is to check the condition before acquiring the lock, and check it again once the lock is held:

public class Singleton { 
  private static Singleton reference; 
  private static final Lock lock = new ReentrantLock(); 
 
  public static Singleton getReference() { 
    if (reference == null) { 
      lock.lock(); 
      try { 
        if (reference == null) { 
          reference = new Singleton(); 
        } 
      } finally { 
        lock.unlock(); 
      } 
    } 
    return reference; 
  } 
} 

This solution still has a subtle problem. The second check prevents two objects from being created, but because reference is not declared volatile, the Java memory model allows another task to observe a non-null reference pointing to a partially constructed object. The best solution to this problem doesn't use any explicit synchronization mechanism; it relies on the guarantee that a class is initialized the first time it is used:

public class Singleton { 
 
  private static class LazySingleton { 
    private static final Singleton INSTANCE = new Singleton(); 
  } 
 
  public static Singleton getSingleton() { 
    return LazySingleton.INSTANCE; 
  } 
 
}
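If you prefer to keep the double-checked structure itself, the canonical fix since the Java 5 memory model is to declare the field volatile, which prevents other threads from seeing a partially constructed instance. A minimal sketch (the class name is illustrative):

```java
// Correct double-checked locking: valid since the Java 5 memory model.
public class VolatileSingleton {
    // volatile forbids the reordering that could publish a
    // partially constructed object to another thread.
    private static volatile VolatileSingleton reference;

    private VolatileSingleton() { }

    public static VolatileSingleton getReference() {
        VolatileSingleton result = reference; // single volatile read on the fast path
        if (result == null) {
            synchronized (VolatileSingleton.class) {
                result = reference;
                if (result == null) {          // re-check while holding the lock
                    reference = result = new VolatileSingleton();
                }
            }
        }
        return result;
    }
}
```

The local result variable is an optimization: on the common path where the instance already exists, the volatile field is read only once.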

Read-write lock

When you protect access to a shared variable with a lock, only one task can access the variable at a time, independently of the operation it performs on it. Sometimes, you will have variables that you modify only a few times but read many times. In this circumstance, a plain lock performs poorly, because all the read operations could safely run concurrently. To solve this problem, we can use the read-write lock design pattern. This pattern defines a special kind of lock with two internal locks: one for read operations and another for write operations. The behavior of this lock is as follows:

  • If one task is doing a read operation and another task wants to do another read operation, it can do it
  • If one task is doing a read operation and another task wants to do a write operation, it's blocked until all the readers finish
  • If one task is doing a write operation and another task wants to do an operation (read or write), it's blocked until the writer finishes

The Java Concurrency API includes the ReentrantReadWriteLock class, which implements this design pattern. If you want to implement this pattern from scratch, you have to be very careful with the priority between read tasks and write tasks: if too many read tasks exist, write tasks can wait too long (writer starvation).
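A minimal sketch of the pattern with ReentrantReadWriteLock (the PriceStore class and its methods are illustrative): reads take the shared lock, so many readers can proceed at once, while writes take the exclusive lock:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of a map protected by a read-write lock: many concurrent
// readers, but writers get exclusive access.
public class PriceStore {
    private final Map<String, Double> prices = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public Double getPrice(String item) {
        lock.readLock().lock();      // shared: other readers are not blocked
        try {
            return prices.get(item);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void setPrice(String item, double price) {
        lock.writeLock().lock();     // exclusive: blocks readers and writers
        try {
            prices.put(item, price);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

A typical use: many tasks calling getPrice() concurrently while an occasional task calls setPrice(); the write lock briefly drains the readers and then updates the map exclusively.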

Thread pool

This design pattern removes the overhead of creating a new thread for every task you want to execute. It is formed by a set of threads and a queue of tasks pending execution. The set of threads usually has a fixed size. When a thread finishes the execution of a task, it doesn't terminate; instead, it looks for another task in the queue. If there is one, it executes it; if not, the thread waits until a task is inserted into the queue, but it is not destroyed.

The Java Concurrency API includes some classes that implement the ExecutorService interface (for example, ThreadPoolExecutor) and internally use a pool of threads.
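For example, a fixed-size pool created with Executors.newFixedThreadPool reuses a few worker threads across many tasks; the pool size and task count below are arbitrary:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the Thread pool pattern: 4 workers process 20 tasks.
public class PoolDemo {
    public static int run() throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 reusable threads
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 1; i <= 20; i++) {
            final int n = i;
            results.add(pool.submit(() -> n * n)); // Callable<Integer> task
        }
        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get();                        // blocks until the task finishes
        }
        pool.shutdown();                           // no new tasks; workers exit when idle
        return sum;                                // sum of squares 1..20 = 2870
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```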

Thread local storage

This design pattern defines how to give each thread its own copy of a global or static variable. When you have a static attribute in a class, all the objects of that class access the same instance of the attribute. If you use thread-local storage, each thread accesses a different instance of the variable.

The Java Concurrency API includes the ThreadLocal class to implement this design pattern.
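A minimal sketch (names illustrative): each thread that touches the ThreadLocal gets its own counter, so increments performed in one thread are invisible to another:

```java
// Sketch of the Thread local storage pattern with ThreadLocal.
public class ThreadLocalDemo {
    // Each thread sees its own copy, lazily initialized to 0.
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    static int increment(int times) {
        for (int i = 0; i < times; i++) {
            counter.set(counter.get() + 1); // updates only this thread's copy
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("worker: " + increment(5)));
        worker.start();
        worker.join();
        // The worker's 5 increments did not touch this thread's copy.
        System.out.println("main: " + increment(3));
    }
}
```

Within a single thread, the value accumulates across calls; a fresh thread always starts from the initial value.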