DevOps practices


DevOps consists of multiple practices, each providing distinct functionality to the overall process. Figure 2 shows the relationship between them. Configuration management, continuous integration, and continuous deployment form the core practices that enable DevOps. When we deliver software services by combining these three practices, we achieve continuous delivery. Continuous delivery is a mature capability of an organization that depends on the maturity of its configuration management, continuous integration, and continuous deployment.

Continuous feedback at all stages forms the feedback loop that helps provide superior services to customers. It runs across all DevOps practices. Let's take a closer look at each of these capabilities and DevOps practices:

Figure 2: DevOps practices and their activities

Configuration management

Software applications and services need a physical or virtual environment on which they can be deployed. Typically, the environment is an infrastructure comprising the hardware and operating system on which software can be deployed. Software applications are decomposed into multiple services running on different servers, either on-premises or in the cloud. Each service has its own application and infrastructure configuration requirements. In short, both infrastructure and application are needed to deliver software systems to customers, and each has its own configuration. If the configuration drifts, the application might not work as expected, leading to downtime and failure. Modern ALM dictates the use of multiple stages and environments on which an application should be deployed with different configurations. For example, the application will be deployed to a development environment for developers to see the result of their work. It will also be deployed to multiple test environments, with different configurations, for executing different types of tests. It will then be deployed to a preproduction environment to conduct user acceptance tests, and finally, it will be deployed to a production environment. It is important to ensure that the application can be deployed to multiple environments without any manual changes to its configuration.

Configuration management provides a set of processes and tools that help ensure each environment and application gets its own configuration. Configuration management tracks configuration items, and anything that changes from environment to environment should be treated as a configuration item. Configuration management also defines the relationships between configuration items and how changes in one configuration item impact others.

Configuration management helps in the following ways:

  • Infrastructure as Code: When the process of provisioning infrastructure and its configuration is represented through code, and the same code goes through the application lifecycle process, it is known as Infrastructure as Code. Infrastructure as Code helps automate the provisioning and configuration of infrastructure. It also represents the entire infrastructure in code that can be stored in a repository and version-controlled. This allows you to return to previous environment configurations when needed. It also enables an environment to be provisioned multiple times in a consistent and predictable manner, so all environments provisioned in this way are consistent and equal at all stages of the ALM process (a minimal sketch follows this list).
  • Deployment and configuration of an application: Deploying and configuring an application is the next step after provisioning the infrastructure. An example of application deployment and configuration is deploying a Web Deploy package on one server, deploying SQL Server schemas and data (bacpac) on another server, and changing the SQL connection string on the web server to point to the appropriate SQL Server instance. Configuration management stores the values of the application configuration for each environment on which it is deployed.
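
As a minimal sketch of Infrastructure as Code, the following PowerShell DSC configuration declares the desired state of a web server. The node name WEB01 and the chosen Windows features are illustrative assumptions rather than examples from this book:

Configuration WebServerConfig
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'WEB01'
    {
        # Ensure IIS is installed on the target server
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        # Ensure ASP.NET 4.5 support is present
        WindowsFeature AspNet45
        {
            Ensure = 'Present'
            Name   = 'Web-Asp-Net45'
        }
    }
}

# Compile the configuration into a MOF file and apply it to the node
WebServerConfig -OutputPath 'C:\DSC\WebServerConfig'
Start-DscConfiguration -Path 'C:\DSC\WebServerConfig' -Wait -Verbose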

The configuration settings applied to environments and applications should also be monitored. Records of the expected and desired configuration, along with any differences, should be maintained. Any drift from this expected and desired configuration can make the application unavailable and unreliable. Configuration management is capable of finding such drift and reconfiguring the application and environment to their desired state.
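
Assuming a DSC configuration such as the earlier sketch has already been applied to the node, the built-in DSC cmdlets can detect and correct drift; this is only an illustrative fragment:

# Check whether the node has drifted from its desired state (returns True or False)
Test-DscConfiguration -Verbose

# Reapply the last known desired configuration to correct any drift
Start-DscConfiguration -UseExisting -Wait -Verbose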

With automated configuration management in place, the team does not have to manually deploy and configure the environments and applications. The operations team is not dependent on the development team for deployment activities.

Another aspect of configuration management is source code control. Software comprises code, data, and configuration. Generally, team members working on an application change the same files simultaneously. The source code should be up to date at any point in time and should only be accessible by authenticated team members. The code and other artifacts are themselves configuration items. Source code control helps increase collaboration and communication within the team, since each team member is aware of the other team members' activities. This ensures that conflicts are resolved at an early stage.

Continuous integration

Multiple developers write code that is stored and maintained in a common repository. The code is normally checked in or pushed to the repository when a developer has finished developing a feature. This can happen in a day, or it might take days or weeks. Developers might be working together on the same feature, and they might also follow the practice of pushing or checking in code only every few days or weeks. This can cause issues with code quality. One of the tenets of DevOps is to fail fast. Developers should check in or push their code to the repository often, as soon as it makes sense to do so. The code should be compiled frequently to check that developers have not inadvertently introduced any bugs and that the complete code base can be compiled at any point in time. If developers do not follow such practices, each of them may accumulate stale code on their local workstation that is not integrated with the other developers' code. Eventually, when such a stale and large code base is integrated from all developers, it starts failing, and it becomes difficult and time-consuming to fix the resulting issues.

Continuous integration solves these kinds of challenges. It helps with the compilation and validation of any code pushed or checked in by a developer by taking it through a series of validation steps. Continuous integration creates a process flow consisting of multiple steps and comprises continuous automated builds and continuous automated tests. Normally, the first step is the compilation of the code. After successful compilation, each step is responsible for validating the code from a specific perspective. For example, when unit tests are executed on the compiled code, code coverage can be measured to check which code paths are covered. This could reveal whether comprehensive unit tests have been written or whether there is scope to add further unit tests. The result of continuous integration is a set of deployment packages that can be used by continuous deployment for deployment to multiple environments.

Developers are encouraged to check in their code multiple times a day instead of after multiple days or weeks. Continuous integration initiates the execution of the build pipeline automatically as soon as the code is checked in or pushed. When all activities comprising the build execute successfully without any errors, the build-generated artifacts are deployed to multiple environments. Although every system demands its own configuration of continuous integration, a typical example is shown in Figure 3.

Continuous integration increases the productivity of developers. They do not have to manually compile their code, run multiple types of tests one after another, and then create packages out of the results. It also reduces the risk of introducing bugs into the code and provides early feedback to developers about its quality. Overall, by adopting a continuous integration practice, deliverables are of higher quality and are delivered faster:

Figure 3: Sample continuous integration process

Build automation

Build automation consists of multiple tasks executing in sequence. Generally, the first task is responsible for fetching the latest source code from the repository. The source code might comprise multiple projects and files, which are compiled to generate artifacts such as executables, dynamic link libraries, assemblies, and more. Successful build automation indicates that there are no compile-time errors in the code.

There can be more steps to build automation depending on the nature and type of a project.
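
As an illustration, a build task could be a short PowerShell script that restores packages and compiles the solution. The solution name and the assumption that nuget.exe and msbuild.exe are available on the path are hypothetical:

# Restore NuGet packages for the solution
& nuget.exe restore '.\MyApp.sln'

# Compile the solution in Release configuration
& msbuild.exe '.\MyApp.sln' /t:Build /p:Configuration=Release /verbosity:minimal
if ($LASTEXITCODE -ne 0) {
    throw 'Build failed; stopping the pipeline.'
}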

Test automation

Test automation consists of tasks that are responsible for validating different aspects of the code. These tasks test the code from different perspectives and are executed in sequence. Generally, the first step is to run a series of unit tests on the code. Unit testing refers to the process of testing the smallest unit of a feature to validate its behavior in isolation from other features. It can be automated or manual; however, the preference is automated unit testing.

Code coverage is another aspect of automated testing that can be measured to find out how much of the code is exercised while running the unit tests. It is generally represented as a percentage. If code coverage is not close to 100 percent, it is either because the developer has not written unit tests for the uncovered behavior or because the uncovered code is not required at all.
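
With Pester, for example, unit tests and code coverage can be gathered in a single step; the file names below are illustrative assumptions:

# Run Pester unit tests and measure code coverage for the script under test
$results = Invoke-Pester -Script '.\Tests\Calculator.Tests.ps1' `
                         -CodeCoverage '.\Source\Calculator.ps1' `
                         -PassThru

# Fail the build step if any unit test failed
if ($results.FailedCount -gt 0) {
    throw "$($results.FailedCount) unit test(s) failed."
}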

There can be more steps to test automation depending on the nature and type of a project. Successful execution of test automation, with no significant failures, should trigger the packaging tasks.

Application packaging

Packaging is the process of generating deployable artifacts such as MSI files, NuGet packages, Web Deploy packages, and database packages, versioning them, and storing them in a location from which they can be consumed by other pipelines and processes.
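
As a hedged illustration, the following commands create a Web Deploy package and a NuGet package stamped with a version number; the project, .nuspec file, and version values are assumptions:

$version = '1.0.42'

# Create a Web Deploy package for the web application
& msbuild.exe '.\src\WebApp\WebApp.csproj' /t:Package /p:Configuration=Release "/p:PackageLocation=.\drop\WebApp_$version.zip"

# Create a NuGet package described by a .nuspec file
& nuget.exe pack '.\MyApp.nuspec' -Version $version -OutputDirectory '.\drop'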

Continuous deployment

By the time the process reaches the deployment stage, continuous integration has ensured that there is a functional application that can now be deployed to multiple environments for further quality checks and testing. Continuous deployment refers to the capability to deploy applications and services to preproduction and production environments through automation. For example, continuous deployment could provision and configure an environment and then deploy and configure an application on top of it. After conducting multiple validations, such as functional tests and performance tests, on a preproduction environment, the production environment is provisioned and configured, and the application is deployed to it through automation. There are no manual steps in the deployment process; every deployment task is automated.

Continuous deployment should provision new environments or update existing ones, and then deploy applications with their newer configuration on top of them.

All the environments are provisioned through automation using the principle of Infrastructure as Code. This ensures that all environments, be they development, test, preproduction, production, or any other environment, are similar. Similarly, the application is deployed through automation, ensuring that it is deployed uniformly across all environments. The configuration across these environments could be different, depending on the application.
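
As a hedged sketch of such an automated step, an ARM template could provision the environment and Web Deploy could then push the application package. The resource group, template, server URL, and credentials below are assumptions, not values from this book:

# Provision or update the target environment from an ARM template
# (assumes the AzureRM module is installed and an Azure login has been performed)
New-AzureRmResourceGroupDeployment -ResourceGroupName 'MyApp-Test' `
    -TemplateFile '.\templates\environment.json' `
    -TemplateParameterFile '.\templates\environment.test.parameters.json'

# Deploy the Web Deploy package produced by continuous integration
$msdeploy = "$env:ProgramFiles\IIS\Microsoft Web Deploy V3\msdeploy.exe"
& $msdeploy '-verb:sync' `
    '-source:package=.\drop\WebApp_1.0.42.zip' `
    '-dest:auto,computerName=https://web01:8172/msdeploy.axd,userName=deploy,password=<secret>,authType=Basic' `
    '-allowUntrusted'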

Continuous deployment is generally integrated with continuous integration. When continuous integration has done its work by generating the final deployable packages, continuous deployment kicks in and starts its own pipeline, called the release pipeline. The release pipeline consists of multiple environments, each consisting of tasks responsible for provisioning the environment, configuring the environment, deploying applications, configuring applications, executing operational validation on the environment, and testing the application on those environments. We will look at the release pipeline in greater detail in the next chapter and also in Chapter 10, Continuous Delivery and Deployment.

Employing continuous deployment provides immense benefits. There is a high degree of confidence in the overall deployment process, which helps ensure faster, risk-free releases on production. The chance of anything going wrong is drastically reduced. The team will have lower stress levels and rollback to a previous working environment is possible if there are issues with the current release:

Figure 4: Sample continuous deployment/release pipeline process

Although every system demands its own configuration of the release pipeline, a typical example is shown in Figure 4. It is important to note that, generally, provisioning and configuring multiple environments is part of the release pipeline, and approval should be sought before moving to the next environment. The approval process might be manual or automated, depending on the maturity of the organization.

Preproduction deployment

The release pipeline starts once a drop is available from continuous integration. The steps it should perform are to get all the artifacts from the drop, either create a new environment from scratch or use an existing one, and deploy and configure the application on top of it. This environment can then be used for all kinds of testing and validation purposes.

Test automation

After deploying an application, a series of tests can be performed on the environment. One of the tests executed here is a functional test. Functional tests are primarily aimed at validating the feature completeness and functionality of the application. These tests are written from requirements gathered from the customer. Another set of tests relates to the scalability and availability of the application; this typically includes load tests, stress tests, and performance tests. It should also include operational validation of the infrastructure environment.
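
Operational validation can itself be automated; for instance, a small Pester suite could act as a smoke test against the freshly deployed environment (the URL and application pool name are illustrative assumptions):

Describe 'Web application smoke tests' {

    It 'responds with HTTP 200 on the home page' {
        $response = Invoke-WebRequest -Uri 'http://web01/' -UseBasicParsing
        $response.StatusCode | Should Be 200
    }

    It 'has its application pool running' {
        Import-Module WebAdministration
        (Get-WebAppPoolState -Name 'MyAppPool').Value | Should Be 'Started'
    }
}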

Staging environment deployment

This is very similar to the test environment deployment, with the only difference being that the configuration values for the environment and the application will be different.

Acceptance tests

Acceptance tests are generally conducted by stakeholders of the application and can be manual or automated. This step is a validation from the customer's point of view regarding the correctness and completeness of an application's functionality.

Deployment to production

Once customers provide their approval, the same steps as those of test and staging environment deployment are executed, with the only difference being that the configuration values for the environment and application are specific to the production environment. Validation is conducted after deployment to ensure that the application is running according to expectations.

Continuous delivery

Continuous delivery and continuous deployment might sound similar to many readers; however, they are not the same. While continuous deployment is about deploying to multiple environments, and finally to a production environment, through automation, continuous delivery is the ability to generate application packages in a way that makes them readily deployable to any environment. To generate artifacts that are readily deployable, continuous integration should be used to generate the application artifacts. A new or existing environment should then be used to deploy these artifacts and to conduct functional tests, performance tests, and user acceptance tests through automation. Once these activities execute successfully with no errors, the application package is considered readily deployable. Continuous delivery helps get feedback faster from both operations and the end user, and this feedback can then be implemented in subsequent iterations.

Continuous learning

With all the previously mentioned DevOps practices, it is possible to create stable, robust, reliable, and performant business applications and deploy them automatically to a production environment. However, the benefits of DevOps will not last for long if a continuous improvement and feedback principle is not in place. It is of the utmost importance that real-time feedback about the application's behavior is passed on to the development team from both end users and the operations team.

Feedback should be passed to the teams, providing relevant information about what is going well and, importantly, what is not going well.

Applications should be built with monitoring, auditing, and telemetry in mind. The architecture and design should support these. The operations team should collect telemetry information from the production environment, capture any bugs and issues, and pass this information on to the development team such that they can be fixed in subsequent releases. This process is shown in Figure 5.
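
As one hedged example of such a feedback mechanism, the operations team could periodically collect recent application errors from a production server's event log and share them with the development team; the log name, time window, and output file are assumptions:

# Collect application errors (Level 2) logged over the last 24 hours
$since = (Get-Date).AddHours(-24)
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Level = 2; StartTime = $since } |
    Select-Object TimeCreated, ProviderName, Id, Message |
    Export-Csv -Path '.\application-errors.csv' -NoTypeInformation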

Continuous learning helps make the application robust and resilient to failures. It also helps make sure that the application is meeting consumer requirements:

Figure 5: Sample continuous learning process