
CPU-bound and I/O-bound

Bottlenecks are the most important thing to understand when it comes to optimizing the performance of applications, because they indicate the points at which any type of throttling occurs. In this section we will talk about how concurrency and parallelism can affect the performance of an algorithm based on whether it's bound to the CPU or to I/O operations.

You aren't always going to need, or even benefit from, writing concurrent code. Understanding the bottlenecks of your code, how threads and coroutines work, and the differences between concurrency and parallelism is required to correctly assess when and how to implement concurrent software.

CPU-bound

Many algorithms are built around operations that need only the CPU to be completed. The performance of such algorithms is limited by the CPU on which they run, and upgrading the CPU alone will usually improve their performance.

Let's think, for example, of a simple algorithm that takes a word and returns whether it's a palindrome or not:

fun isPalindrome(word: String): Boolean {
    val lcWord = word.toLowerCase()
    return lcWord == lcWord.reversed()
}

Now, let's consider that this function is called from another function, filterPalindromes(), which takes a list of words and returns the ones that are palindromes:

fun filterPalindromes(words: List<String>): List<String> {
    return words.filter { isPalindrome(it) }
}

Finally, filterPalindromes() is called from the main() function of the application, where a list of words has already been defined:

val words = listOf("level", "pope", "needle", "Anna", "Pete", "noon", "stats")

fun main(args: Array<String>) {
    filterPalindromes(words).forEach {
        println(it)
    }
}

In this example, all the parts of the execution depend on the CPU. If the code is updated to send hundreds of thousands of words, filterPalindromes() will take longer. And if the code is executed on a faster CPU, performance will improve without any code changes.
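One rough way to observe this behavior is to time the filter directly. The following sketch is not from the book; it simply repeats the sample word list to build a large input, and the repetition count is an arbitrary value chosen for illustration:

import kotlin.system.measureTimeMillis

fun main(args: Array<String>) {
    // Arbitrary repetition of the sample list, just to build a large CPU-heavy input
    val manyWords = List(100_000) { words[it % words.size] }

    val elapsed = measureTimeMillis {
        filterPalindromes(manyWords)
    }
    println("Filtered ${manyWords.size} words in $elapsed ms")  // time scales with CPU speed
}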

I/O-bound

I/O-bound algorithms, on the other hand, rely on input/output devices, so their execution time depends on the speed of those devices. Consider, for example, an algorithm that reads a file and passes each word in the document to filterPalindromes() in order to print the palindromes in the document. Changing a few lines of the previous example will do:

fun main(args: Array<String>) {
    val words = readWordsFromJson("resources/words.json")

    filterPalindromes(words).forEach {
        println(it)
    }
}

The readWordsFromJson() function will read the file from the filesystem. This is an I/O operation whose duration depends on how quickly the file can be read. If the file is stored on a hard drive, the performance of the application will be worse than if the file is stored on an SSD, for example.
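The book leaves readWordsFromJson() undefined. A minimal sketch could look like the following, assuming the file contains a plain JSON array of strings and that a JSON library such as Gson is on the classpath:

import com.google.gson.Gson
import java.io.File

// Hypothetical implementation, not shown in the book: assumes the file holds
// a plain JSON array of strings, such as ["level", "pope", ...].
fun readWordsFromJson(path: String): List<String> {
    val json = File(path).readText()    // blocking I/O: speed depends on the storage device
    return Gson().fromJson(json, Array<String>::class.java).toList()
}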

Many other operations, such as networking or receiving input from the peripherals of the computer, are also I/O operations. Algorithms that are I/O-bound will have their performance bottleneck in the I/O operations, which means that optimizations depend on external systems or devices.

Many I/O-bound, high-performance applications, such as databases, may end up being only as fast as the access to the storage of the machine they are running on. Networking-based applications, like many phone apps, will perform only as well as their connection to the internet.

Concurrency versus parallelism in CPU-bound algorithms

CPU-bound algorithms will benefit from parallelism but probably not from single-core concurrency. For example, the previous algorithm that takes a list of words and filters the palindromes could be modified so that a thread is created for each 1,000 words received. So, if filterPalindromes() receives 3,000 words, the execution could be represented like this:

filterPalindromes() creating a thread for each 1,000 words
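A sketch of what such a thread-per-chunk version might look like, using plain threads; this is an illustration of the idea, not code from the book:

import java.util.concurrent.ConcurrentLinkedQueue
import kotlin.concurrent.thread

// Illustrative sketch only: one thread per chunk of 1,000 words,
// with the matches collected in a thread-safe queue.
fun filterPalindromesThreaded(words: List<String>): List<String> {
    val palindromes = ConcurrentLinkedQueue<String>()
    val threads = words.chunked(1_000).map { chunk ->
        thread {                                   // starts the thread immediately
            chunk.filter { isPalindrome(it) }.forEach { palindromes.add(it) }
        }
    }
    threads.forEach { it.join() }                  // wait for every chunk to be processed
    return palindromes.toList()
}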

Single-core execution

If this is executed on a single core, that core will interleave between the three threads, each time filtering some of the words before switching to the next thread. This interleaving process is called context switching.

Context switching adds overhead to the overall process, because it requires saving the state of the current thread and then loading the state of the next one. This overhead makes it likely that the multithreaded implementation of filterPalindromes() will take longer on a single-core machine than the sequential implementation seen before. This happens because the sequential implementation has one core do all the work but avoids the context switching.

Parallel execution

If parallel execution is assumed, where each thread is executed on one dedicated core, then the execution of filterPalindromes() could take around one third of the time of the sequential implementation. Each core will filter its 1,000 words without interruption, reducing the total amount of time needed to complete the operation.

It's important to create a reasonable number of threads for CPU-bound algorithms, making this decision based on the number of cores of the current device. This can be leveraged by using Kotlin's CommonPool, which is a pool of threads created to run CPU-bound algorithms.

CommonPool's size is the number of cores on the machine minus one. So, for example, it will have a size of three on a machine with four cores.
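A minimal sketch of dispatching the CPU-bound filter to CommonPool, reusing the word list defined earlier. It assumes the pre-1.0 kotlinx.coroutines API that the book is based on, where CommonPool can be passed directly as the coroutine context (current versions of the library expose Dispatchers.Default for the same purpose):

import kotlinx.coroutines.experimental.CommonPool
import kotlinx.coroutines.experimental.async
import kotlinx.coroutines.experimental.runBlocking

fun main(args: Array<String>) = runBlocking {
    val deferred = async(CommonPool) {
        filterPalindromes(words)        // CPU-bound work runs on the shared pool
    }
    deferred.await().forEach { println(it) }
}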

Concurrency versus parallelism in I/O-bound algorithms

As seen before, I/O-bound algorithms are constantly waiting on something else. This constant waiting allows single-core devices to use the processor for other useful tasks while waiting. So concurrent algorithms that are I/O-bound will perform similarly regardless of whether the execution happens in parallel or on a single core.

I/O-bound algorithms can be expected to always perform better in concurrent implementations than in sequential ones, so it's recommended to always execute I/O operations concurrently. As mentioned before, in GUI applications it's particularly important not to block the UI thread.
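A sketch of the palindrome example with the file read performed concurrently, again assuming the pre-1.0 kotlinx.coroutines API from the book; the dedicated single-threaded context is just one possible choice for running the blocking read, not the book's own implementation:

import kotlinx.coroutines.experimental.async
import kotlinx.coroutines.experimental.newSingleThreadContext
import kotlinx.coroutines.experimental.runBlocking

// Hypothetical dedicated context for blocking I/O
val ioContext = newSingleThreadContext("io")

fun main(args: Array<String>) = runBlocking {
    // The file read starts on its own thread, so the caller is free to do other work
    val wordsDeferred = async(ioContext) { readWordsFromJson("resources/words.json") }

    println("Reading words...")         // the current thread is not blocked by the read

    filterPalindromes(wordsDeferred.await()).forEach {
        println(it)
    }
}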