Hands-On High Performance with Go

By: Bob Strecansky

Overview of this book

Go is an easy-to-write language that is popular among developers thanks to its features such as concurrency, portability, and ability to reduce complexity. This Golang book will teach you how to construct idiomatic Go code that is reusable and highly performant. Starting with an introduction to performance concepts, you’ll understand the ideology behind Go’s performance. You’ll then learn how to effectively implement Go data structures and algorithms along with exploring data manipulation and organization to write programs for scalable software. This book covers channels and goroutines for parallelism and concurrency to write high-performance code for distributed systems. As you advance, you’ll learn how to manage memory effectively. You’ll explore the compute unified device architecture (CUDA) application programming interface (API), use containers to build Go code, and work with the Go build cache for quicker compilation. You’ll also get to grips with profiling and tracing Go code for detecting bottlenecks in your system. Finally, you’ll evaluate clusters and job queues for performance optimization and monitor the application for performance regression. By the end of this Go programming book, you’ll be able to improve existing code and fulfill customer requirements by writing efficient programs.
Table of Contents (20 chapters)

Section 1: Learning about Performance in Go
Section 2: Applying Performance Concepts in Go
Section 3: Deploying, Monitoring, and Iterating on Go Programs with Performance in Mind

Understanding memory utilization

Once we have our initial binary, we can build on our knowledge of the ELF format to continue our understanding of memory utilization. The text, data, and bss segments are the foundation on which the heap and stack are laid out. The heap begins at the end of the .bss and .data segments and grows upward, toward higher memory addresses.
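
To make these regions concrete, here is a minimal sketch (the file and variable names are illustrative, not from the book): an initialized package-level variable is stored in the data segment, a zero-valued package-level variable is accounted for in the bss segment, and a large slice created at runtime is backed by heap memory. You can get a rough picture of where the package-level symbols end up by running go tool nm on the compiled binary and looking at the symbol type letters.

// memlayout.go: a sketch of where Go places different kinds of data.
package main

import "fmt"

// An initialized package-level variable lives in the data segment of the binary.
var initialized int64 = 42

// A zero-valued package-level variable is accounted for in the bss segment,
// which takes up no space in the binary file itself.
var uninitialized [4096]byte

func main() {
    // The backing array of this slice is allocated at runtime on the heap,
    // which grows from the end of the data and bss segments toward higher
    // memory addresses.
    heapBacked := make([]byte, 1<<20)

    fmt.Println(initialized, len(uninitialized), len(heapBacked))
}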

The stack is an allocation of contiguous blocks of memory. This allocation happens automatically within the function call stack. When a function is called, its variables are allocated memory on the stack. When the function call completes, that memory is deallocated. The stack has a fixed size that can only be determined at compile time. Stack allocation is inexpensive because it only needs to push to and pop from the stack.
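
The Go compiler decides between stack and heap placement using escape analysis, and it can report its decisions. The following sketch (the function names are illustrative) contrasts a value that stays on the stack with one that must move to the heap because a pointer to it outlives the function call; building with go build -gcflags="-m" prints the compiler's escape-analysis output.

// escape.go: a sketch contrasting stack and heap allocation.
// Build with: go build -gcflags="-m" escape.go
package main

import "fmt"

// The local variable x never outlives this call, so it is allocated on the
// stack and reclaimed automatically when the function returns.
func stackAllocated() int {
    x := 7
    return x
}

// Returning a pointer to y forces it to outlive the call; escape analysis
// typically reports this as "moved to heap: y".
func heapAllocated() *int {
    y := 7
    return &y
}

func main() {
    fmt.Println(stackAllocated(), *heapAllocated())
}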

The heap...