
Exploring DevOps practices and habits

Since you are not the first team going on this journey, you can learn from the experiences of those who went before you. One example is the Microsoft team that builds Azure DevOps. Being in the rare position of using their own product to develop that very product, they have learned a great deal about what makes DevOps successful. From this, they have identified seven key DevOps practices and seven DevOps habits that many successful DevOps teams share:

DevOps practices                    | DevOps habits
------------------------------------|------------------------------------------------
Configuration management            | Team autonomy and enterprise alignment
Release management                  | Rigorous management of technical debt
Continuous integration              | Focus on flow of customer value
Continuous deployment               | Hypothesis-driven development
Infrastructure as Code              | Evidence gathered in production
Test automation                     | Live-site culture
Application performance monitoring  | Managing infrastructure as a flexible resource

Now it is important to realize that just copying the motions described will not guarantee success. Just as with Agile, you will have to spend time to really understand these practices and habits, where they come from, and what they contribute to a continuous flow of value to your end users.

The following sections explore all of these practices and habits in more detail. Keep these in the back of your mind while reading the rest of this book. While the rest of this book will mostly focus on technical means of how to do things, do not forget that these are only means. The real value comes from mindset and creating a culture that is focused on creating a continuous flow of value to your customers.

DevOps practices

This section discusses all seven DevOps practices in turn. As you will quickly see, they are highly interrelated, and it is quite hard to practice one without the others. For example, test automation is closely related to continuous integration and continuous deployment.

In case you are planning to take the AZ-400 exam, mastering all of these practices and performing them using Azure DevOps will help you significantly.

Configuration management

Configuration management is about versioning the configuration of your application and the components it relies on, along with your application itself. Configuration is kept in source control and takes the form of, for example, JSON or YAML files that describe the desired configuration of your application. These files are the input for tools such as Ansible, Puppet, or PowerShell DSC that configure your environment and application. These tools are often invoked from a continuous deployment pipeline.

The desired state can also be reapplied at an interval, even if no changes have been made to the intended configuration. This ensures that the actual configuration stays correct and that manual changes are automatically reverted. We call this the prevention of configuration drift. Configuration drift occurs as servers are added or removed over time, or through manual, ad hoc interventions by administrators. Of course, this implies that intended updates to the configuration are made in source control and applied only using tools.
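To make this a little more concrete, the following sketch shows what such a versioned desired-state description might look like as an Ansible playbook. It is only an illustration: the host group, template, and file paths are hypothetical and not taken from this book.

```yaml
# A minimal, hypothetical Ansible playbook describing the desired state of a
# web host. Running it repeatedly converges the host back to this state,
# which is how manual changes and configuration drift are corrected.
- name: Configure web servers
  hosts: webservers                       # assumed inventory group
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure application settings match source control
      ansible.builtin.template:
        src: appsettings.json.j2          # hypothetical template kept in the repository
        dest: /etc/myapp/appsettings.json

    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

A continuous deployment pipeline could run such a playbook after every deployment, or on a schedule, so that drift is corrected without manual intervention.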

Configuration management or configuration as code is highly related to infrastructure as code. The two are often intertwined and on some platforms, the difference between the two might even feel artificial. Configuration as code will be discussed in detail in Chapter 6, Infrastructure and Configuration as Code.

Release management

Release management is about being in control of which version of your software is deployed to which environment. Versions are often created using continuous integration and delivery pipelines. These versions, along with all of the configuration needed, are then stored as immutable artifacts in a repository. From there, release management tools are used to plan and control how these versions are deployed to one or more environments. Example controls are manual approvals and automated checks on open work items and quality before deployment to a new environment is allowed.
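As a sketch of how this can look in Azure Pipelines YAML (assuming a .NET application and an illustrative environment named test), a build stage publishes an immutable artifact and a deployment stage targets an environment on which approvals and other checks can be configured:

```yaml
# Illustrative multi-stage pipeline: the Build stage publishes an immutable
# artifact; the deployment stage targets an Azure DevOps environment, where
# approvals and automated checks act as release management controls.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)
            displayName: Create the deployable version
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: drop                # the immutable version that moves through environments

  - stage: DeployTest
    dependsOn: Build
    jobs:
      - deployment: DeployToTest
        environment: test                 # hypothetical environment; approvals/checks are set on it
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: drop
                - script: echo "Deploy the 'drop' artifact to the test environment"
                  displayName: Placeholder deployment step
```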

Release management is related to continuous deployment and focuses more on controlling the flow of versions through the continuous deployment pipeline. Chapter 6, Infrastructure and Configuration as Code, will cover configuration as code as part of release management.

Continuous integration

Continuous integration is a practice where every developer integrates their own work with that of the other developers on the team at least once a day, and preferably more often. This means that every developer should push their work to the repository at least once a day, and a continuous integration build verifies that their work compiles and that all unit tests pass. It is important to understand that this verification should not run only on the developer's code in isolation; the real value comes when the work is also integrated with the work of others.
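A minimal continuous integration definition in Azure Pipelines YAML could look like the following sketch (assuming a .NET project; for other stacks, only the build and test commands change):

```yaml
# Illustrative CI build: every push to main compiles the integrated sources
# and runs the unit tests. The same pipeline can also be required as a pull
# request validation build through branch policies.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: dotnet build --configuration Release
    displayName: Build the integrated sources
  - script: dotnet test --configuration Release --no-build
    displayName: Run all unit tests
```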

When integrating changes often and fast, problems with merging changes are less frequent and if they occur, are often less difficult to solve. In Chapter 2, Everything Starts with Source Control, you will learn more about how to set up your source control repositories to make this possible. In Chapter 3, Moving to Continuous Integration, you will learn about setting up a continuous integration build.

Continuous deployment

Continuous deployment is the practice of automatically deploying every new version of sufficient quality to production. When practicing continuous deployment, you have a fully automated pipeline that takes in every new version of your application (every commit), results in a new release, and starts deploying it to one or more environments. The first environment is often called test and the final environment will be production.

In this pipeline, there are multiple steps that verify the quality of the software, before letting it proceed to the next environment. If the quality is not sufficient, the release is aborted and will not propagate to the next environment. The premise behind this approach is that, in the pipeline, you try to prove that you cannot take the current version to the next environment. If you fail to prove so, you assume it is ready for further progression.

Only when a release has gone through all of the other environments in the pipeline is it deployed to production. Whenever a release cannot progress to the next environment, that release is canceled completely. While you might be inclined to fix the reason for the failure and then restart the deployment from the point where it failed, it is important not to do so. After all, the changes you made at that point have not been validated by the controls that the version had already passed through. The only way to validate the new version as a whole is to start the pipeline from the beginning. You can see this clearly in the following diagram:

In Chapter 4, Continuous Deployment, you will learn about setting up continuous deployment using Azure DevOps Pipelines.

The preceding diagram can be found at https://en.wikipedia.org/wiki/Continuous_delivery#/media/File:Continuous_Delivery_process_diagram.svg. The image is by Grégoire Détrez, original by Jez Humble, under CC BY-SA 4.0, at https://creativecommons.org/licenses/by-sa/4.0/
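In Azure Pipelines YAML, the idea of a release either progressing or being canceled can be sketched with dependent stages; the stage names and steps below are placeholders, not the book's own pipeline:

```yaml
# Illustrative promotion flow: each stage runs only when the previous stages
# succeeded, so a failed quality check aborts the release. A fixed version
# always starts again from the first stage.
pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Test
    jobs:
      - job: DeployAndVerify
        steps:
          - script: echo "Deploy the new version to the test environment and run its quality checks"

  - stage: Production
    dependsOn: Test
    condition: succeeded()                # a failure in Test cancels the release; it never skips ahead
    jobs:
      - job: Deploy
        steps:
          - script: echo "Deploy the same, unchanged version to production"
```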

Infrastructure as code

When writing an application, the binaries that you are building have to be running somewhere, on some application host. An example of such an application host can be a web server such as IIS or Apache. Next to an application host, we might need a database and some messaging solution. All of this together we call the infrastructure for our application. When practicing infrastructure as code, you are keeping a description of this infrastructure in your source code repository, alongside your application code.

When the time comes to release a new version of the application and you require one or more changes in the infrastructure, you are executing this description of your desired infrastructure using tools such as Chef, Puppet, PowerShell DSC, or Azure ARM templates. The execution of such a description is idempotent, which means that it can be executed more than once and the end result is the same. This is because your description of the infrastructure describes the desired state you want the infrastructure to be in and not a series of steps to be executed. Those steps to be executed, if there are any, are automatically determined by your tool of choice. Applying the desired state can also be done automatically in a continuous deployment pipeline and is often executed before updating the application code.

The big advantage of this is that you can now easily create a new environment where the infrastructure is guaranteed to be the same as in your other environments. Also, the problem of configuration drift, where the infrastructure between your different environments slowly diverges, disappears: every time you reapply the desired state, each environment is forced back into it.
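As an illustration of applying such a description from a pipeline, the following Azure Pipelines step deploys an ARM template in incremental mode; the service connection, resource group, and template path are hypothetical:

```yaml
# Illustrative pipeline step that reapplies the infrastructure description.
# Incremental deployments are idempotent, so running this step repeatedly
# converges every environment to the same desired state.
steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'my-azure-connection'    # hypothetical service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az deployment group create \
          --resource-group rg-myapp-test \
          --template-file infrastructure/azuredeploy.json \
          --mode Incremental
```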

Chapter 6, Infrastructure and Configuration as Code, of this book will discuss infrastructure as code in more detail.

Test automation

To continuously deliver value to your end users, you will have to release fast and often. This has implications for the way you test your application. You can no longer execute manual tests when you release your application every few minutes. This means that you have to automate as many of your tests as possible.

You will most likely want to create multiple test suites for your applications that you run at different stages of your delivery pipeline. Fast unit tests that run within a few minutes and that are executed whenever a new pull request is opened should give your team very quick feedback on the quality of their work and should catch most of the errors. Next, the team should run one or more slower test suites later in the pipeline to further increase your confidence in the quality of a version of your application.

All of this should limit the amount of manual testing to a bare minimum and allow you to automatically deploy new versions of your application with confidence.
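The following sketch shows how different test suites can be attached to different points of a pipeline; the project layout and test commands are assumptions, not taken from this book:

```yaml
# Illustrative split between a fast and a slower test suite: quick unit tests
# give feedback early, while a longer-running suite increases confidence
# later in the pipeline.
pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: FastFeedback
    jobs:
      - job: UnitTests
        steps:
          - script: dotnet test tests/UnitTests --configuration Release
            displayName: Fast unit tests, run for every new pull request

  - stage: ExtendedVerification
    dependsOn: FastFeedback
    jobs:
      - job: IntegrationTests
        steps:
          - script: dotnet test tests/IntegrationTests --configuration Release
            displayName: Slower test suite, run later in the pipeline
```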

Chapter 8, Continuous Testing, of this book will cover test automation in detail.

Application performance monitoring

This last practice is about learning how your application is doing in production. Gathering metrics such as response times and the number of requests tells you how the system is performing. Capturing errors is also part of performance monitoring and allows you to start fixing problems without having to wait for your customers to contact you about them.

In addition to that, you can gather information on which parts of the application are more or less frequently used and whether new features are being picked up by users. Learning about usage patterns provides you with great insights into how customers really use your applications and common scenarios they are going through.

Chapter 9, Security and Compliance, and Chapter 10, Application Monitoring, will go into detail on learning about both your application and your users' behavior in production.

DevOps habits

The seven habits of successful DevOps teams are concerned more with culture and your attitude while developing and delivering software, and less with technical means, than the DevOps practices are. Still, it is important to know and understand these habits, since they will make DevOps adoption easier.

You will notice that developing these habits will reinforce the use of the practices enumerated previously and the tools you use to implement them. And of course, this holds the other way around as well.

Team autonomy and enterprise alignment

An important part of working Agile is creating teams that are largely self-directed and can make decisions without (too many) dependencies outside the team. Such a team will therefore often include multiple roles, including a product owner who owns one or more features and is empowered to decide on the way forward with them.

However, this autonomy also comes with the responsibility to align the work of the team with the direction the whole product is taking. It is important to develop ways of aligning the work of tens or hundreds of teams with each other, in such a way that everyone can sail their own course, but the fleet as a whole stays together as well.

In the best-case scenario, teams take it upon themselves to align with the larger vision, instead of having to be given directions every now and then.

Rigorous management of technical debt

Another habit is that of rigorous management of technical debt. The term debt itself suggests that there is a cost (interest) associated with delaying the resolution of an issue. To keep moving at a constant pace and not slowly lose speed over time, it is crucial to keep the number of bugs and architectural issues to a minimum and to tolerate only so many of them. Within some teams, this is even formalized in agreements. For example, a team can agree that the number of unfixed bugs should never exceed the number of team members. This means that if a team has four members and a fifth bug is reported, no new work will be undertaken until at least one bug is fixed.

Focusing on flow of customer value

It is important to accept that users receive no value from code that has been written until they are actually using it. Focusing on the flow of value to a user means that code has to be written, tested, and delivered and should be running in production before you are done. Focusing on this habit can really drive cooperation between disciplines and teams.

Hypothesis-driven development

In many modern development methodologies, there is a product owner who is responsible for ordering all of the work on the backlog. As the expert, the product owner is responsible for maximizing the value delivered by the development team by ordering items based on their business value (divided by effort).

However, recent research has shown that, even though the product owner is an expert, they cannot reliably predict which features will bring the most value to users. Roughly one-third of the work a team does actually adds value for users, while, even worse, another third actually decreases value. For this reason, you can switch your backlog from features or user stories to the hypotheses you want to prove or disprove. You create only a minimal implementation, or even just a hint of a feature, in the product and then measure whether it is picked up by users. Only when this happens do you expand the implementation of the feature.

Evidence gathered in production

Performance measurements should be taken in your production environment, not (just) in an artificial load test environment. There is nothing wrong with executing load tests before going to production if they deliver value to you. However, the real performance is delivered in the production environment, and that is where it should be measured and compared with previous measurements.

This holds also for usage statistics, patterns, and many, many other performance indicators. They can all be automatically gathered using production metrics.

Live-site culture

A live-site culture promotes the idea that anything that happens in the production environment takes precedence over anything else. Next, anything that threatens production, is about to go to production, or hinders going to production at any time gets priority. Only when these are all in order is the attention shifted to future work.

Also, a part of a live-site culture is ensuring that anything that disturbed the operation of the service is thoroughly analyzed—not to find out who to blame or fire but to find out how to prevent this from happening again. Prevention is preferably done by shifting left, for example, detecting an indicator of a repeat incident earlier in the pipeline.

Managing infrastructure as a flexible resource

Finally, a successful DevOps team treats its servers and infrastructure as cattle, not as pets. This means that infrastructure is spun up when needed and discarded as soon as it is no longer needed. The ability to do this is fueled by configuration and infrastructure as code. This might even go as far as creating a new production environment for every new deployment and simply deleting the old production environment after switching all traffic from the old environment to the new one.

Besides keeping these DevOps practices and habits in mind, there are certain stages that you will go through while trying to move to a DevOps culture in your organization. The next section will take you through it.