
Argo CD in Practice

By: Liviu Costea, Spiros Economakis

Overview of this book

GitOps follows the practices of infrastructure as code (IaC), allowing developers to use their day-to-day tools and practices, such as source control and pull requests, to manage apps. With this book, you'll understand how to apply GitOps to bootstrap clusters in a repeatable manner, build CD pipelines for cloud-native apps running on Kubernetes, and minimize deployment failures. You'll start by installing Argo CD in a cluster, setting up user access using single sign-on, performing declarative configuration changes, and enabling observability and disaster recovery. Once you have a production-ready setup of Argo CD, you'll explore how CD pipelines can be built using the pull method, how that increases security, and how the reconciliation process occurs when multi-cluster scenarios are involved. Next, you'll go through common troubleshooting scenarios, from installation to day-to-day operations, and learn how performance can be improved. Later, you'll explore the tools that can be used to parse the YAML you write for deploying apps. You can then check whether it is valid for new versions of Kubernetes, whether it has any security or compliance misconfigurations, and whether it follows the best practices for cloud-native apps running on Kubernetes. By the end of this book, you'll be able to build a real-world CD pipeline using Argo CD.
Table of Contents (15 chapters)

Part 1: The Fundamentals of GitOps and Argo CD
Part 2: Argo CD as a Site Reliability Engineer
Part 3: Argo CD in Production

What is Argo CD?

Over the years, most of us have used the same separation between application environments: development, test, staging, and production. Representing these in Kubernetes can be done in many ways and depends on factors such as team size and budget. One option is a separate cluster per environment; another is a separation within a single cluster by namespaces. In either case, we usually create a new namespace for the application with the necessary deployment resources and add whatever is needed to configure the application for that environment (config maps, secrets, ingresses, and so on).
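As a rough sketch, a namespace-per-environment setup might look like the following manifests; all names, images, and values here are illustrative placeholders, not taken from the book:

```yaml
# Illustrative only: a dedicated namespace per environment,
# with an environment-specific ConfigMap alongside the Deployment.
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-staging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-staging
data:
  LOG_LEVEL: "debug"   # a value that would differ per environment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp-staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.2.3  # placeholder image
          envFrom:
            - configMapRef:
                name: myapp-config
```

A production environment would typically repeat the same structure in a `myapp-production` namespace (or a separate cluster), with different replica counts and configuration values.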

The drawback of the approach described above is that configuration drift appears over time. For example, our development cluster or namespace will have the latest development version of the application, or changes to Kubernetes resources such as network policies, and we will need to manually apply all those changes to the rest...
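This drift problem is what a GitOps controller such as Argo CD addresses: the desired state lives in Git, and the controller continuously reconciles the cluster toward it. A minimal Argo CD Application resource might look like the following sketch; the repository URL, path, and names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/myapp-deploy.git  # placeholder repo
    targetRevision: main
    path: environments/staging   # the folder holding this environment's manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-staging
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual cluster changes, countering drift
```

With `selfHeal` enabled, manual edits made directly to the cluster are reverted back to the state declared in Git, which is precisely the mechanism that keeps environments from drifting apart.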