Effective Concurrency in Go

By: Burak Serdar
Overview of this book

The Go language has been gaining momentum due to its treatment of concurrency as a core language feature, making concurrent programming more accessible than ever. However, concurrency is still an inherently difficult skill to master, since it requires developing the right mindset to decompose problems into concurrent components correctly. This book will guide you in deepening your understanding of concurrency and show you how to make the most of its advantages. You’ll start by learning what guarantees the language offers when running concurrent programs. Through multiple examples, you will see how to use this information to develop concurrent algorithms that run without data races and complete successfully. You’ll also find out all you need to know about several common concurrency patterns, such as worker pools, asynchronous pipelines, fan-in/fan-out, scheduling periodic or future tasks, and error and panic handling in goroutines. The central theme of this book is to give you, the developer, an understanding of why concurrent programs behave the way they do, and how they can be used to build correct programs that work the same way on all platforms. By the time you finish the final chapter, you’ll be able to develop, analyze, and troubleshoot concurrent algorithms written in Go.

Why a memory model is necessary

In 1965, Gordon Moore observed that the number of transistors in dense integrated circuits doubled every year. In 1975, he revised this to doubling every two years. Because of these advancements, it quickly became possible to squeeze enormous numbers of components onto a tiny chip, enabling ever-faster processors.

Modern processors use many advanced techniques, such as caching, branch prediction, and pipelining, to utilize the circuitry on a CPU to its maximum potential. However, in the 2000s, hardware engineers started to hit the limits of what could be optimized on a single core. As a result, they began building chips containing multiple cores. Nowadays, performance depends not only on how fast a single core can execute instructions but also on how many cores can run instructions simultaneously.
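As a quick aside, you can observe both dimensions from a Go program using the standard runtime package. This minimal sketch is illustrative and not part of the book's text; it uses only the standard-library calls runtime.NumCPU and runtime.GOMAXPROCS:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU reports the number of logical CPUs usable by this process.
	fmt.Println("logical CPUs:", runtime.NumCPU())
	// GOMAXPROCS(0) queries, without changing it, the maximum number of
	// OS threads that can execute Go code simultaneously.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```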

Compiler technology did not stand still while these improvements were happening. Modern compilers can aggressively optimize programs...
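To get a feel for why these hardware and compiler optimizations make a memory model necessary, consider the following minimal sketch (the variable names and structure are illustrative, not taken from the book). It contains a deliberate data race, and without the guarantees of a memory model there is no way to say what it prints, or whether it terminates at all:

```go
package main

import "fmt"

// data and done are shared between goroutines with no synchronization,
// so this program contains a deliberate data race.
var (
	data int
	done bool
)

func producer() {
	data = 42   // write (1)
	done = true // write (2): may become visible before write (1)
}

func main() {
	go producer()
	// Busy-wait on done with no synchronization. The compiler is free to
	// hoist the read of done out of the loop, and the compiler or CPU may
	// reorder the two writes in producer. Depending on those choices, this
	// program can print 42, print 0, or spin forever.
	for !done {
	}
	fmt.Println(data)
}
```

Replacing the flag with a channel send/receive, or protecting both variables with a sync.Mutex, removes the race and makes the output well defined; the memory model is what specifies exactly which constructs provide such guarantees.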