Hands-On High Performance with Go

By: Bob Strecansky

Overview of this book

Go is an easy-to-write language that is popular among developers thanks to its features such as concurrency, portability, and ability to reduce complexity. This Golang book will teach you how to construct idiomatic Go code that is reusable and highly performant. Starting with an introduction to performance concepts, you’ll understand the ideology behind Go’s performance. You’ll then learn how to effectively implement Go data structures and algorithms along with exploring data manipulation and organization to write programs for scalable software. This book covers channels and goroutines for parallelism and concurrency to write high-performance code for distributed systems. As you advance, you’ll learn how to manage memory effectively. You’ll explore the compute unified device architecture (CUDA) application programming interface (API), use containers to build Go code, and work with the Go build cache for quicker compilation. You’ll also get to grips with profiling and tracing Go code for detecting bottlenecks in your system. Finally, you’ll evaluate clusters and job queues for performance optimization and monitor the application for performance regression. By the end of this Go programming book, you’ll be able to improve existing code and fulfill customer requirements by writing efficient programs.
Table of Contents (20 chapters)

Section 1: Learning about Performance in Go
Section 2: Applying Performance Concepts in Go
Section 3: Deploying, Monitoring, and Iterating on Go Programs with Performance in Mind

Introducing semaphores

Semaphores are another method for controlling how goroutines execute parallel tasks. They are convenient because they give us a worker-pool pattern without requiring us to shut down workers once the work is complete and the workers sit idle. Weighted semaphores are relatively new to Go; the implementation in the golang.org/x/sync/semaphore package arrived in early 2017, making it one of the newest parallel task constructs in the language.

If we take the simple loop in the following code block, which adds 100 ms of latency to each request and then appends an item to a slice, we can quickly see that the total time grows because these tasks operate in series:

package main

import (
	"fmt"
	"time"
)

func main() {
	var out = make([]string, 5)
	for i := 0; i < 5; i++ {
		// Each iteration waits 100 ms (simulating request latency)
		// before storing its result, so all five take roughly 500 ms.
		time.Sleep(100 * time.Millisecond)
		out[i] = fmt.Sprintf("result %d", i)
	}
	fmt.Println(out)
}