Hands-On Software Engineering with Golang

By: Achilleas Anagnostopoulos

Overview of this book

Over the last few years, Go has become one of the favorite languages for building scalable and distributed systems. Its opinionated design and built-in concurrency features make it easy for engineers to author code that efficiently utilizes all available CPU cores. This Golang book distills industry best practices for writing lean Go code that is easy to test and maintain, and helps you to explore its practical implementation by creating a multi-tier application called Links ‘R’ Us from scratch. You’ll be guided through all the steps involved in designing, implementing, testing, deploying, and scaling an application. Starting with a monolithic architecture, you’ll iteratively transform the project into a service-oriented architecture (SOA) that supports the efficient out-of-core processing of large link graphs. You’ll learn about various cutting-edge and advanced software engineering techniques such as building extensible data processing pipelines, designing APIs using gRPC, and running distributed graph processing algorithms at scale. Finally, you’ll learn how to compile and package your Go services using Docker and automate their deployment to a Kubernetes cluster. By the end of this book, you’ll know how to think like a professional software developer or engineer and write lean and efficient Go code.
Table of Contents (21 chapters)

Section 1: Software Engineering and the Software Development Life Cycle
Section 2: Best Practices for Maintainable and Testable Go Code
Section 3: Designing and Building a Multi-Tier System from Scratch
Section 4: Scaling Out to Handle a Growing Number of Users
Epilogue

Building a crawler pipeline for the Links 'R' Us project

In the following sections, we will be putting the generic pipeline package that we built to the test by using it to construct the crawler pipeline for the Links 'R' Us project!

Following the single-responsibility principle, we will break down the crawl task into a sequence of smaller subtasks and assemble the pipeline illustrated in the following figure. This decomposition also comes with the benefit that each stage processor can be tested in complete isolation, without the need to create a pipeline instance, as the sketch after the figure illustrates:

Figure 2: The stages of the crawler pipeline that we will be constructing
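To make the isolation point concrete, here is a minimal, self-contained sketch of what one such stage processor might look like. The Processor interface, crawlerPayload type, and linkExtractor shown here are illustrative assumptions for this sketch only; the actual interfaces and payload types live in the book's pipeline and Chapter07/crawler packages and may differ in detail.

```go
// A minimal, self-contained sketch of a single crawler stage processor.
// All names below (Processor, crawlerPayload, linkExtractor) are
// illustrative assumptions, not the exact types from the book's packages.
package main

import (
	"context"
	"fmt"
	"regexp"
)

// Processor mirrors the shape of a generic pipeline stage: it receives a
// payload, transforms it, and returns it (or an error).
type Processor interface {
	Process(ctx context.Context, p *crawlerPayload) (*crawlerPayload, error)
}

// crawlerPayload carries a single page through the crawler stages.
type crawlerPayload struct {
	URL        string
	RawContent string
	Links      []string
}

// linkExtractor implements one of the smaller subtasks from the
// decomposition: it scans fetched content for href attributes and records
// the outgoing links on the payload.
type linkExtractor struct {
	hrefRegex *regexp.Regexp
}

func newLinkExtractor() *linkExtractor {
	return &linkExtractor{
		hrefRegex: regexp.MustCompile(`href="([^"]+)"`),
	}
}

func (le *linkExtractor) Process(_ context.Context, p *crawlerPayload) (*crawlerPayload, error) {
	for _, match := range le.hrefRegex.FindAllStringSubmatch(p.RawContent, -1) {
		p.Links = append(p.Links, match[1])
	}
	return p, nil
}

func main() {
	// Because the stage is just a Processor, it can be exercised on its own:
	// build a payload, call Process directly, and inspect the result.
	payload := &crawlerPayload{
		URL:        "https://example.com",
		RawContent: `<a href="https://example.com/about">About</a>`,
	}
	out, err := newLinkExtractor().Process(context.Background(), payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Links) // [https://example.com/about]
}
```

Because each stage depends only on the processor contract, a unit test can construct a payload, invoke Process directly, and assert on the output, with no pipeline instance involved.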

The full code for the crawler and its tests can be found in the Chapter07/crawler package in the book's GitHub repository.

...