Cloud application architecture patterns


Usually, developing applications that run in a cloud environment is not that different from regular application development. However, there are a few architectural patterns that are particularly common when targeting a cloud environment, and you will learn about them in the following sections.

The twelve-factor app

The twelve-factor app methodology is a set of rules for building scalable and resilient cloud applications. It was published by Heroku, one of the dominant PaaS providers. However, it can be applied to all kinds of cloud applications, independently of concrete infrastructure or platform providers. It is also independent of programming languages and persistence services and can equally be applied to Go programming and, for example, Node.js programming. The twelve-factor app methodology describes (unsurprisingly) twelve factors that you should consider in your application for it to be easily scalable, resilient, and platform independent. You can read the full description of each factor at https://12factor.net. For the purpose of this book, we will highlight some factors that we deem especially important:

  • Factor II: Dependencies—Explicitly declare and isolate dependencies: This factor deserves special mention because it is actually not as important in Go programming as in other languages. Typically, a cloud application should never rely on any required library or external tool being already present on a system. Dependencies should be explicitly declared (for example, using an npm package.json file for a Node.js application) so that a package manager can pull all these dependencies when deploying a new instance of the application. In Go, an application is typically deployed as a statically compiled binary that already contains all required libraries. However, even a Go application can be dependent on external system tools (for example, it can fork out to tools such as ImageMagick) or on existing C libraries. Ideally, you should deploy tools like these alongside your application. This is where container engines, such as Docker, shine.
  • Factor III: Config—Store config in the environment: Configuration is any kind of data that might vary between deployments, for example, connection data and credentials for external services and databases. These kinds of data should be passed to the application via environment variables. In a Go application, retrieving these is then as easy as calling os.Getenv("VARIABLE_NAME") (see the first sketch after this list). In more complex cases (for example, when you have many configuration variables), you can also resort to libraries such as github.com/tomazk/envcfg or github.com/caarlos0/env. For heavy lifting, you can use the github.com/spf13/viper library.
  • Factor IV: Backing Services—Treat backing services as attached resources: Ensure that the services that your app depends on (such as databases, messaging systems, or external APIs) are easily swappable by configuration. For example, your app could accept an environment variable, such as DATABASE_URL, that might contain mysql://root:root@localhost/test for a local development deployment and the URL of your production database in your production setup.
  • Factor VI: Processes—Execute the app as one or more stateless processes: Running application instances should be stateless; any kind of data that should persist beyond a single request/transaction needs to be stored in an external persistence service. One important case to keep in mind is user sessions in web applications. Often, user session data is stored in the process's memory (or is persisted to the local filesystem) in the expectation that subsequent requests from the same user will be served by the same instance of your application. Instead, try to keep user sessions stateless or move the session state into an external data store, such as Redis or Memcached.
  • Factor IX: Disposability—Maximize robustness with fast startup and graceful shutdown: In a cloud environment, sudden termination (both intentional, for example, in case of downscaling, and unintentional, in case of failures) needs to be expected. A twelve-factor app should have fast startup times (typically in the range of a few seconds), allowing it to rapidly deploy new instances. Besides fast startup, graceful termination is another requirement. When a server shuts down, the operating system will typically tell your application to terminate by sending a SIGTERM signal that the application can catch and react to accordingly (for example, by ceasing to listen on the service port, finishing requests that are currently being processed, and then exiting); see the second sketch after this list.
  • Factor XI: Logs—Treat logs as event streams: Log data is often useful for debugging and monitoring your application's behavior. However, a twelve-factor app should not concern itself with the routing or storage of its own log data. The simplest solution is to write your log stream to the process's standard output stream (for example, just using fmt.Println(...)). Streaming events to stdout allows a developer to simply watch the event stream on their console when developing the application. In production setups, you can configure the execution environment to catch the process output and send the log stream to a place where it can be processed (the possibilities here are endless—you could store them in your server's journald, send them to a syslog server, store your logs in an ELK setup, or send them to an external cloud service).
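
To make Factors III and IV concrete, the following is a minimal sketch of reading configuration from the environment using only the standard library. The variable names DATABASE_URL and LISTEN_ADDR and the fallback default are assumptions made for this example, not prescribed by the methodology:

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    // Config holds everything that varies between deployments (Factor III).
    type Config struct {
        DatabaseURL string // a backing service as an attached resource (Factor IV)
        ListenAddr  string
    }

    // loadConfig reads all configuration from environment variables so that
    // the same binary can run unchanged in development and production.
    func loadConfig() (Config, error) {
        dbURL := os.Getenv("DATABASE_URL")
        if dbURL == "" {
            return Config{}, fmt.Errorf("DATABASE_URL must be set")
        }
        addr := os.Getenv("LISTEN_ADDR")
        if addr == "" {
            addr = ":8080" // illustrative default for local development
        }
        return Config{DatabaseURL: dbURL, ListenAddr: addr}, nil
    }

    func main() {
        cfg, err := loadConfig()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("connecting to %s, listening on %s\n", cfg.DatabaseURL, cfg.ListenAddr)
    }

With this in place, swapping a backing service is a pure configuration change: point DATABASE_URL at a different server and redeploy; the code stays untouched.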
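
Factor IX can be demonstrated in a similarly small sketch. The code below (the port and the 10-second timeout are our own assumptions) catches SIGTERM, stops accepting new connections, and lets in-flight requests finish using the standard library's http.Server.Shutdown:

    package main

    import (
        "context"
        "log"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"}

        // Run the server in a separate goroutine so that main can wait for signals.
        go func() {
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Fatal(err)
            }
        }()

        // Wait for SIGTERM (sent by most platforms before terminating an instance).
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM, os.Interrupt)
        <-sigs

        // Stop listening and give in-flight requests up to 10 seconds to complete.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := srv.Shutdown(ctx); err != nil {
            log.Println("graceful shutdown failed:", err)
        }
    }

Most execution environments follow SIGTERM with a hard kill after a grace period, which is why the sketch bounds the shutdown with a timeout.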

What are microservices?

When an application is maintained by many different developers over a longer period of time, it tends to get more and more complex. Bug fixes, new or changing requirements, and constant technological changes result in your software continually growing and changing. When left unchecked, this software evolution will lead to your application getting more complex and increasingly difficult to maintain.

Preventing this kind of software erosion is the objective of the microservice architecture paradigm that has emerged over the past few years. In a microservice architecture, a software system is split into a set of (potentially many) independent and isolated services. These run as separate processes and communicate using network protocols (of course, each of these services should in itself be a twelve-factor app). For a more thorough introduction to the topic, we can recommend the original article on the microservice architecture by Lewis and Fowler at https://martinfowler.com/articles/microservices.html.

In contrast to traditional Service-Oriented Architectures (SOA), which have been around for quite a while, microservice architectures focus on simplicity. Complex infrastructure components such as Enterprise Service Buses (ESBs) are avoided at all costs, and instead of complicated communication protocols such as SOAP, simpler means of communication such as REST web services (about which you will learn more in Chapter 2, Building Microservices Using Rest APIs) or AMQP messaging (refer to Chapter 4, Asynchronous Microservice Architectures Using Message Queues) are preferred.

Splitting complex software into separate components has several benefits. For instance, different services can be built on different technology stacks. For one service, using Go as the runtime and MongoDB as the persistence layer may be the optimal choice, whereas a Node.js runtime with a MySQL persistence layer might be a better choice for other components. Encapsulating functionality in separate services allows developer teams to choose the right tool for the right job. Another advantage of microservices on an organizational level is that each microservice can be owned by a different team within an organization. Each team can develop, deploy, and operate their services independently, allowing them to adjust their software in a very flexible way.

Deploying microservices

With their focus on statelessness and horizontal scaling, microservices work well with modern cloud environments. Nevertheless, when choosing a microservice architecture, deploying your application will tend to get more complex overall, as you will need to deploy a larger number of different applications (all the more reason to stick with the twelve-factor app methodology).

However, each individual service will be easier to deploy than a big monolithic application. Depending on the service's size, it will also be easier to upgrade a service to a new runtime or to replace it with a new implementation entirely. Also, you can scale each microservice individually. This allows you to scale out heavily used parts of your application while keeping less utilized components cost-efficient. Of course, this requires each service to support horizontal scaling.

Deploying microservices gets (potentially) more complex when different services use different technologies. A possible solution for this problem is offered by modern container runtimes such as Docker or RKT. Using containers, you can package an application with all its dependencies into a container image and then use that image to quickly spawn a container running your application on any server that can run Docker (or RKT) containers. (Let's return to the twelve-factor app—deploying applications in containers is one of the most thorough interpretations of dependency isolation as prescribed by Factor II.)

Running container workloads is a service offered by many major cloud providers (such as AWS' Elastic Container Service, the Azure Container Service, or the Google Container Engine). Apart from that, there are also container orchestration engines, such as Docker Swarm, Kubernetes, or Apache Mesos, that you can roll out on IaaS cloud platforms or your own hardware. These orchestration engines offer the possibility to distribute container workloads over entire server clusters, and offer a very high degree of automation. For example, the cluster manager will take care of deploying containers across any number of servers, automatically distributing them according to their resource requirements and usage. Many orchestration engines also offer auto-scaling features and are often tightly integrated with cloud environments.

You will learn more about deploying microservices with Docker and Kubernetes in Chapter 6, Deploying Your Application in Containers.

REST web services and asynchronous messaging

When building a microservice architecture, your individual services need to communicate with one another. One widely accepted de facto standard for microservice communication is RESTful web services (about which you will learn more in Chapter 2, Building Microservices Using Rest APIs, and Chapter 3, Securing Microservices). These are usually built on top of HTTP (although the REST architectural style itself is more or less protocol independent) and follow a client/server model with request/reply communication.
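
As a small taste of what is to come in Chapter 2, the following sketch exposes a RESTful endpoint using nothing but the standard net/http package; the /users route and the User payload are invented for illustration:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // User is an illustrative resource served by this example.
    type User struct {
        ID   int    `json:"id"`
        Name string `json:"name"`
    }

    func main() {
        // GET /users returns a JSON list of users (request/reply, client/server).
        http.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodGet {
                w.WriteHeader(http.StatusMethodNotAllowed)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode([]User{{ID: 1, Name: "alice"}})
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

A client issues a plain HTTP GET against /users and receives JSON in reply, which is exactly the synchronous request/reply model described here.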

Figure: Synchronous versus asynchronous communication models

This architecture is typically easy to implement and maintain. It works well for many use cases. However, the synchronous request/reply pattern may hit its limits when you are implementing a system with complex processes that span many services. Consider the first part of the preceding diagram. Here, we have a user service that manages an application's user database. Whenever a new user is created, we will need to make sure that other services in the system are also made aware of this new user. Using RESTful HTTP, the user service needs to notify these other services by REST calls. This means that the user service needs to know all the other services that are in some way affected by the user management domain. This leads to a tight coupling between the components, which is something you'd generally like to avoid.

An alternative communication pattern that can solve these issues is the publish/subscribe pattern. Here, services emit events that other services can subscribe to. The service emitting an event does not need to know which other services are actually listening for it. Again, consider the second part of the preceding diagram—here, the user service publishes an event stating that a new user has just been created. Other services can now subscribe to this event and are notified whenever a new user has been created. These architectures usually require the use of a special infrastructure component: the message broker. This component accepts published messages and routes them to their subscribers (typically using a queue as intermediate storage).
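
To illustrate the pattern (anticipating Chapter 4), here is a hedged sketch of publishing a user-created event to an AMQP message broker with the github.com/streadway/amqp client; the broker URL, the user_events exchange, and the event payload are all assumptions made for this example:

    package main

    import (
        "encoding/json"
        "log"

        "github.com/streadway/amqp"
    )

    func main() {
        // Connect to the broker; in a real app, the URL would come from the
        // environment (Factor III).
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
            log.Fatal(err)
        }
        defer ch.Close()

        // A fanout exchange copies every message to all bound queues, so the
        // publisher neither knows nor cares who is subscribed.
        if err := ch.ExchangeDeclare("user_events", "fanout", true, false, false, false, nil); err != nil {
            log.Fatal(err)
        }

        body, _ := json.Marshal(map[string]string{"event": "user.created", "username": "alice"})
        err = ch.Publish("user_events", "", false, false, amqp.Publishing{
            ContentType: "application/json",
            Body:        body,
        })
        if err != nil {
            log.Fatal(err)
        }
    }

A subscriber would declare its own queue, bind it to the user_events exchange, and consume from it; when several consumers share one queue, the broker also load-balances messages among them.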

The publish/subscribe pattern is a very good method to decouple services from one another—when a service publishes events, it does not need to concern itself with where they will go, and when another service subscribes to events, it also does not know where they came from. Furthermore, asynchronous architectures tend to scale better than ones with synchronous communication. Horizontal scaling and load balancing are easily accomplished by distributing messages to multiple subscribers.

Unfortunately, there is no such thing as a free lunch; this flexibility and scalability are paid for with additional complexity. Also, it becomes hard to debug single transactions across multiple services. Whether this trade-off is acceptable for you needs to be assessed on a case-by-case basis.

In Chapter 4, Asynchronous Microservice Architectures Using Message Queues, you will learn more about asynchronous communication patterns and message brokers.