Modern DevOps Practices

By: Gaurav Agarwal

Overview of this book

Containers have entirely changed how developers and end-users see applications as a whole. With this book, you'll learn all about containers, their architecture and benefits, and how to implement them within your development lifecycle. You'll discover how you can transition from the traditional world of virtual machines and adopt modern ways of using DevOps to ship a package of software continuously. Starting with a quick refresher on the core concepts of containers, you'll move on to study the architectural concepts to implement modern ways of application development. You'll cover topics around Docker, Kubernetes, Ansible, Terraform, Packer, and other similar tools that will help you to build a base. As you advance, the book covers the core elements of cloud integration (AWS ECS, GKE, and other CaaS services), continuous integration, and continuous delivery (GitHub Actions, Jenkins, and Spinnaker) to help you understand the essence of container management and delivery. The later sections of the book will take you through container pipeline security and GitOps (Flux CD and Terraform). By the end of this DevOps book, you'll have learned best practices for automating your development lifecycle and making the most of containers, infrastructure automation, and CaaS, and be ready to develop applications using modern tools and techniques.
Table of Contents (19 chapters)

Section 1: Container Fundamentals and Best Practices
Section 2: Delivering Containers
Section 3: Modern DevOps with GitOps

Open source CaaS with Knative

As we've seen, there are several vendor-specific CaaS services available on the market. Still, the problem with most of them is that they are tied to a single cloud provider. Our container deployment specification then becomes vendor-specific and results in vendor lock-in. As modern DevOps engineers, we also have to ensure that the solution we propose best fits the architecture's needs, and avoiding vendor lock-in is one of the most important of those needs.

An open source, Kubernetes-based solution such as Knative helps us avoid that lock-in. However, Kubernetes by itself is not serverless: you have to define the infrastructure, and daemon-like services must keep at least one instance running at all times. This makes managing microservices applications cumbersome and resource-intensive.
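To make the contrast concrete, here is a minimal sketch (the resource names and container image are illustrative assumptions, not taken from this book): a plain Kubernetes Deployment always keeps its declared number of replicas running, whether or not any traffic arrives, whereas a Knative Service lets the Knative autoscaler scale the workload down to zero pods when it is idle and back up on the next request.

# A plain Kubernetes Deployment: at least one replica is always running,
# even if the service receives no traffic at all.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment          # hypothetical name
spec:
  replicas: 1                     # this pod runs (and consumes resources) 24/7
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: gcr.io/knative-samples/helloworld-go
---
# The equivalent Knative Service: by default, Knative scales the revision
# down to zero pods when idle and spins one up on the next incoming request.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-knative             # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"

Scale-to-zero is what makes the Knative model feel serverless: you still describe your workload declaratively, but you no longer pay for idle instances the way you do with an always-on Deployment.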

But wait! We said that microservices help optimize infrastructure consumption. Yes, that's correct, they do, but they do so within the container space. Imagine that you have a shared cluster of VMs where parts of the application scale...