Effective Concurrency in Go

By: Burak Serdar
Overview of this book

The Go language has been gaining momentum due to its treatment of concurrency as a core language feature, making concurrent programming more accessible than ever. However, concurrency is still an inherently difficult skill to master, since it requires developing the right mindset to decompose problems into concurrent components correctly. This book will guide you in deepening your understanding of concurrency and show you how to make the most of its advantages. You’ll start by learning what guarantees are offered by the language when running concurrent programs. Through multiple examples, you will see how to use this information to develop concurrent algorithms that run without data races and complete successfully. You’ll also find out all you need to know about common concurrency patterns, such as worker pools, asynchronous pipelines, fan-in/fan-out, scheduling periodic or future tasks, and error and panic handling in goroutines. The central theme of this book is to give you, the developer, an understanding of why concurrent programs behave the way they do, and how they can be used to build correct programs that work the same way on all platforms. By the time you finish the final chapter, you’ll be able to develop, analyze, and troubleshoot concurrent algorithms written in Go.

Memory guarantees

Why do we need separate functions for atomic memory operations? If we write to a variable whose size is less than or equal to the machine word size (which is what the int type is defined to be), such as a=1, wouldn’t that be atomic? The Go memory model actually guarantees that the write operation will be atomic; however, it does not guarantee when other goroutines will see the effects of that write operation, if ever. Let’s try to dissect what this statement means. The first part simply says that if you write to a shared memory location that is the same size as a machine word (i.e., int) from one goroutine and read it from another, you will not observe some random value even if there is a race. The memory model guarantees that you will only observe the value before the write operation or the value after it (this is not true for all languages). This also means that if the write operation is larger than the machine word size, then a goroutine reading this...
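
To make the two halves of this guarantee concrete, here is a minimal sketch (not taken from the book; the variable and function names are illustrative) that publishes a word-sized flag to another goroutine. A plain write such as ready = 1 is never observed as a torn value, but only the atomic store/load pair guarantees that the reader eventually sees it:

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// ready is a word-sized shared variable. A plain write such as
// "ready = 1" will never be seen as a partially written (torn) value
// by another goroutine, but without synchronization the reader is not
// guaranteed to ever observe the new value.
var ready int32

func main() {
	observed := make(chan struct{})

	go func() {
		// Spin until the write becomes visible. The atomic load pairs
		// with the atomic store below, so visibility is guaranteed;
		// a plain "for ready == 0 {}" loop would be a data race and
		// could spin forever.
		for atomic.LoadInt32(&ready) == 0 {
		}
		fmt.Println("reader observed ready =", atomic.LoadInt32(&ready))
		close(observed)
	}()

	time.Sleep(10 * time.Millisecond)
	atomic.StoreInt32(&ready, 1) // atomic write: guaranteed to become visible to the reader
	<-observed
}

If the atomic calls are replaced with plain reads and writes of ready, running the program with the race detector (go run -race) reports a data race, which is exactly the situation the memory model declines to make promises about beyond the no-torn-value guarantee.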