Strategizing Continuous Delivery in the Cloud

By: Garima Bajpai, Thomas Schuetz

Overview of this book

Many organizations are embracing cloud technology to remain competitive, but implementing and adopting development processes while modernizing a cloud-based ecosystem can be challenging. Strategizing Continuous Delivery in the Cloud helps you modernize continuous delivery and achieve infrastructure-application convergence in the cloud. You’ll learn the differences between cloud-based and traditional delivery approaches and develop a tailored strategy. You’ll discover how to secure your cloud delivery environment, ensure software security, run different test types, and test in the pre-production and production stages. You’ll also get to grips with the organizational and technical prerequisites for onboarding cloud-based continuous delivery. Then, you’ll explore key aspects of readiness to overcome core challenges in your cloud journey, including GitOps, progressive delivery controllers, feature flagging, differences between cloud-based and traditional tools, and implementing cloud chaos engineering. By the end of this book, you’ll be well-equipped to select the right cloud environment and technologies for CD and to apply techniques for implementing CD in the cloud.
Table of Contents (18 chapters)

Part 1: Foundation and Preparation for Continuous Delivery in the Cloud
Part 2: Implementing Continuous Delivery
Part 3: Best Practices and the Way Ahead

The role of velocity in CD and associated risks

Deployments are events that can have a huge impact on a system. In traditional infrastructures, this led to the assumption that it is safer to deploy less often and in larger batches. Such deployments were called Big Bang deployments and were often done in the middle of the night to reduce the risk and impact of failure. Let’s inspect this a bit:

  • Larger batches: As the changeset of a release grows, the risk of failure and misbehavior increases (the sketch after this list illustrates why), and we have fewer options when a change has a negative impact on customers. When a problem occurs in a large batch, we might have a long changelog to investigate, and it can become very hard to find the root cause. Furthermore, it might be hard for developers to recall how they implemented a feature, as they may have written it a long time ago. Last, but not least, it might become difficult...
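
To make the risk intuition behind larger batches concrete, here is a minimal Python sketch (not from the book). It assumes every change in a batch fails independently with the same per-change probability; the 2% value is purely an illustrative assumption:

    # Minimal sketch: assuming each change fails independently with
    # probability p (the 2% default is an illustrative assumption),
    # the chance that a batch of n changes contains at least one bad
    # change grows quickly with batch size.

    def batch_failure_probability(n_changes: int, p_change: float = 0.02) -> float:
        """Probability that at least one change in the batch misbehaves."""
        return 1 - (1 - p_change) ** n_changes

    for n in (1, 5, 20, 100):
        print(f"{n:>3} changes -> {batch_failure_probability(n):.1%} risk of a failing deployment")

Under this admittedly simplified model, a single change carries a 2% risk, while a batch of 100 changes fails roughly 87% of the time, which is exactly why smaller, more frequent deployments are preferred.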