Implementing DevOps with Ansible 2

By Jonathan McAllister

Overview of this book

Thinking about adopting a DevOps culture in your organization using a very simple yet powerful automation tool, Ansible 2? Then this book is for you! In this book, you will start with the role of Ansible in the DevOps module, which covers fundamental DevOps practices and how Ansible is leveraged by DevOps organizations to implement consistent and simplified configuration management and deployment. You will then move on to the next module, Ansible with DevOps, where you will understand Ansible fundamentals and how Ansible Playbooks can be used for simple configuration management and deployment tasks. After these simpler tasks, you will move on to the third module, Ansible Syntax and Playbook Development, where you will learn advanced configuration management implementations and use Ansible Vault to secure top-secret information in your organization. In this module, you will also learn about popular DevOps tools and the support that Ansible provides for them (MySQL, NGINX, Apache, and so on). The last module, Scaling Ansible for the Enterprise, is where you will integrate Ansible with CI and CD solutions and provision Docker containers using Ansible. By the end of the book, you will have learned to use Ansible to drive your DevOps tasks.

DevOps in the Modern Software Organization


The solution to the issue of silos in an organization, it would seem, was to alter the culture, simplify and automate the delivery of software changes (by delivering more often), change the architecture of software solutions (away from monoliths), and pave the way for the organization to outmaneuver the competition through synergy, agility, and velocity. The idea is that if a business can deliver the features customers want faster than the competition, it will outdo its opponents.

It was for these reasons that modern DevOps practices came to fruition. They also allowed organizations to adopt DevOps incrementally.

The DevOps assembly line

In the infancy of computer science, computer programmers were wizards, their code was a black art, and organizations paid hefty sums to develop and release software. Oftentimes, software projects would falter and companies would go bankrupt attempting to release a software title to the market. Computer science back then was very risky and entailed long development cycles with painful integration periods and oftentimes failed releases.

In the mid-2000s, cloud computing took the world by storm. The idea of an elastic implementation of computing resources that could scale easily alongside rapidly expanding organizations paved the way for a wave of innovation. By 2012, cloud computing was a huge trend, and hundreds if not thousands of companies were clamoring to get to the cloud.

As software engineering matured in the early 2000s and the widespread use of computers grew, a new software paradigm came to fruition: Software as a Service (SaaS). In the past, software was shipped to customers on CD or floppy disk, or through direct onsite installations, and the widely accepted pricing model was a one-time purchase. The new paradigm provided a subscription-based revenue model and touted an elastic, highly scalable infrastructure with promises of recurring revenue for businesses. It became known as the cloud.

With cloud computing on the rise and the software-use paradigm changing dramatically, the previously accepted big bang release strategy began to look antiquated. As a result of this shifting mentality, organizations could no longer wait over a year for an integration cycle to take place before quality assurance test plans were executed, nor could the business wait two years for engineering and QA to sign off on a given release. To help solve this problem, Continuous Integration was born, and the beginnings of an assembly-line system for software development began to take shape. The point of DevOps was more than just a collaborative edge within teams; the premise was in fact a business strategy to get features into customers' hands more efficiently through cultural DevOps implementations.

Correlations between a DevOps assembly line and manufacturing

Prior to the Industrial Revolution, goods were mostly handcrafted and developed in small quantities. This approach limited the quantity a craftsman could create as well as the customer base they could sell their goods to. This process of handcrafting goods proved to be expensive, time-consuming, and wasteful. When Henry Ford began developing the automobile, he looked to identify a more efficient method of manufacturing goods. The result of his quest was to implement a standardization methodology and adopt a progressive assembly-line approach for developing automobiles.

In the 1980s and 90s, software engineering efforts would oftentimes drain company finances. This was the result of inefficiencies in processes, poor communication, a lack of coordinated development efforts, and an inadequate release process. Inefficiencies such as lengthy integration phases, manual quality assurance, and release-plan verification and execution often added a significant amount of time to the overall development and release strategies of the business. As a way to begin mitigating these risks, new practices and processes began to take shape.

As a result of these trends, software organizations began to apply manufacturing techniques to software engineering. One of the more prevalent manufacturing concepts applied to software development teams is the manufacturing assembly line (also known as progressive assembly). In factories all around the world, assembly lines have helped organize product-creation processes and ensure that, prior to shipping and delivery, manufactured goods are carefully assembled and verified. The assembly-line approach provides a level of repeatability and quantifiable verification for mass-produced products. Factories adopt progressive assembly to minimize waste, maximize efficiency, and deliver products of higher quality. In recent years, software engineering organizations have gravitated towards this progressive assembly-line practice to likewise reduce waste, improve throughput, and release higher-quality products. From this approach, the overarching DevOps concept was born.

DevOps architectures and practices

From the DevOps movement, a set of software architectural patterns and practices have become increasingly popular. The primary logic behind the development of these architectural patterns and practices is derived from the need for scalability, no-downtime deployments, and minimizing negative customer reactions to upgrades and releases. Some of these you may have heard of (microservices), while others may be a bit vague (blue-green deployments).

In this section, we will outline some of the more popular architectures and practices to evolve from the DevOps movement and learn how they are being leveraged to provide flexibility and velocity at organizations worldwide.

Encapsulated software development

In software development, encapsulation often means different things to different people. In the context of DevOps architecture, it simply means modularity. This is an important implementation requirement for DevOps organizations because it provides a way for components to be updated and replaced individually. Modular software is easier to develop, maintain, and upgrade than monolithic software. This applies both to the grand architectural approach and at the object level in object-oriented programming. If you have ever worked at a software organization with a monolithic legacy code base, you are probably quite familiar with spaghetti code or the monolithic fractal onion software approach. The following diagram contrasts a monolithic software architecture with an encapsulated one:

[Diagram: monolithic vs. encapsulated software architecture]

As we can see from the diagram, the modular, organized software solution is significantly easier to understand, and potentially to manage, than the monolithic one.
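
To ground this in Ansible terms, here is a minimal sketch of an encapsulated design: a top-level playbook composed of self-contained roles, each of which can be developed, tested, and replaced on its own. The role names (common, webserver, database, monitoring) are illustrative, not prescribed by this chapter.

    # site.yml -- an illustrative, modular playbook: each role encapsulates
    # exactly one concern and can be upgraded or swapped independently
    ---
    - name: Deploy the application stack from encapsulated roles
      hosts: app_servers
      become: true
      roles:
        - common      # baseline packages and hardening
        - webserver   # for example, NGINX configuration
        - database    # for example, MySQL installation and schema
        - monitoring  # agents and health checks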

Microservices

Microservices architectures cropped up around the same time as containerization and portable virtualization. The general concept behind a microservice architecture is to architect a software system in such a way that large development groups have a simple way to update software through repeatable deployments, upgrading only the parts that have changed. In some ways, microservices provide a basic constraint on, and solution to, development sprawl, ensuring that software components don't become monolithic. One way to think about upgrading only the parts that have changed is replacing the tires on a car rather than replacing the entire car every time the tires become worn.

A microservice development paradigm requires discipline from development personnel to ensure the structure and content of the microservice don't grow beyond its initially defined scope. As such, the basic components of a microservice are listed here:

  • Each microservice should have an API or externally facing mode of communication
  • Each microservice, where applicable, should have a unique database component
  • Each microservice should only be accessible through its API or externally facing mode of communication

So, from what we've learned, microservices vs. monolithic architectures can be summed up in the following basic diagram:

[Diagram: microservices vs. monolithic architecture]
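
As a hedged illustration of the three constraints above, the following sketch deploys a hypothetical orders microservice alongside its own private database, with only the service's API port published externally. It assumes the community.docker collection is installed; the image and network names are invented for the example.

    # deploy_orders_service.yml -- illustrative only: one microservice owning
    # its own data store, reachable from outside solely through its API
    ---
    - name: Deploy the (hypothetical) orders microservice
      hosts: docker_hosts
      become: true
      tasks:
        - name: Create a private network shared by the service and its database
          community.docker.docker_network:
            name: orders-net

        - name: Run the service's dedicated database (no published ports)
          community.docker.docker_container:
            name: orders-db
            image: postgres:15          # each microservice owns its database
            env:
              POSTGRES_PASSWORD: "example-only"   # placeholder secret
            networks:
              - name: orders-net

        - name: Run the microservice, exposing only its API
          community.docker.docker_container:
            name: orders-api
            image: example/orders-api:1.4.2       # hypothetical image
            networks:
              - name: orders-net
            published_ports:
              - "8080:8080"             # the API is the only external entry point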

Continuous Integration and Continuous Delivery

Continuous Integration and Continuous Delivery, or CI/CD as they are better known in the software industry, have become a fundamental component of the DevOps movement. The implementation of these practices varies widely across organizations, largely because organizations are at significantly different stages of CI/CD maturity and evolution.

Continuous Integration represents a foundation for a completely automated build and deployment solution and is usually the starting point in a CI/CD quest. Continuous Integration represents a specific set of development practices that aim to validate each change to a source-controlled software system through automation. In many regards, the specific practice of CI also represents mainline software development coupled with a set of basic verification systems, which ensure that a commit doesn't cause any code-compilation issues and doesn't contain any known landmines.

The general practice of CI is provided here; a minimal automation sketch follows the list:

  1. A developer commits code changes to the source-control mainline (per Martin Fowler's formulation of CI) at least once a day. This ensures that code is collaborated on even when changes are incomplete.
  2. An automation system detects the check-in and validates that the code can be compiled (syntax check).
  3. The same automation system executes a set of unit tests against the newly updated code base.
  4. The system notifies the committer if there are any identifiable defects related to the check-in.
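
As a rough sketch of how these four steps might be automated with Ansible, the following playbook fetches the mainline, builds/syntax-checks it, runs the unit tests, and reports failures. The repository URL, workspace path, and make targets are stand-ins, not part of the original text.

    # ci_validate.yml -- a minimal sketch of the CI steps above, run by the
    # automation system whenever a new commit is detected on the mainline
    ---
    - name: Validate the latest commit on the mainline
      hosts: build_agents
      tasks:
        - name: Fetch the mainline at its latest revision
          ansible.builtin.git:
            repo: https://example.com/acme/app.git   # hypothetical repository
            dest: /var/ci/workspace
            version: main

        - name: Compile / syntax-check the code base
          ansible.builtin.command:
            cmd: make build                          # stand-in build command
            chdir: /var/ci/workspace

        - name: Execute the unit-test suite
          ansible.builtin.command:
            cmd: make test                           # stand-in test command
            chdir: /var/ci/workspace
          register: unit_tests
          ignore_errors: true

        - name: Notify the committer of any identifiable defects
          ansible.builtin.debug:
            msg: "Unit tests failed: fix forward quickly or revert the change."
          when: unit_tests is failed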

If, at the end of the CI cycle for a given commit, any identifiable defects exist, the committer has two options:

  1. Fix the issue quickly.
  2. Revert the change from the source control (to ensure the system is in a known working state).

While the practice of CI may sound quite easy, in many ways, it's quite difficult for development organizations to implement. This is usually related to the cultural atmosphere of the team and organization.

Note

It is worth noting that the source for the CI practices mentioned here comes from Martin Fowler and James Shore. These software visionaries were instrumental in creating and advocating CI implementations and solid development practices. CI is also the base platform required for Continuous Delivery, which was popularized by Jez Humble and David Farley in their 2010 book Continuous Delivery.

Continuous Delivery represents a continuation of CI and requires CI as a foundational starting point. Continuous Delivery starts by validating each committed change to a software system through the basic CI process described earlier. The main addition Continuous Delivery offers is that, once the validation of the code change is complete, the CD system deploys (installs) the software onto a mock environment and performs additional testing there.

The Continuous Delivery practice aims to provide instant feedback to developers on the quality of their commit and the potential reliability of their code base. The end goal is to keep the software in a releasable form at all times. When implemented correctly, CI/CD provides significant business value to the organization and can help reduce wasted development cycles debugging complex merges and commits that don't actually work or provide business value.

Based on what we described previously, Continuous Delivery has the following basic flow of operations (a sketch of the deployment stage follows the list):

  • User commits code to source-control mainline
  • Automated CI process detects the change
  • Automated CI process builds/syntax-checks the code base for compilation issues
  • Automated CI process creates a uniquely versioned deployable package
  • Automated CI process pushes the package to an artifact repository
  • Automated CD process pulls the package onto a given environment
  • Automated CD process deploys/installs the package onto the environment
  • Automated CD process executes a set of automated tests against the environment
  • Automated CD process reports any failures
  • Automated CD process deploys the package onto additional environments
  • Automated CD process allows additional manual testing and validation
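
A hedged sketch of the deployment stage in Ansible: a uniquely versioned package is pulled from the artifact repository, installed onto the target environment, and smoke-tested. The repository URL, version, service name, and health endpoint are all illustrative.

    # cd_deploy.yml -- illustrative deployment stage of the flow above
    ---
    - name: Deploy a uniquely versioned package to an environment
      hosts: "{{ target_env | default('staging') }}"
      become: true
      vars:
        app_version: "1.4.2"        # injected by the CI system in practice
      tasks:
        - name: Pull the package from the artifact repository
          ansible.builtin.get_url:
            url: "https://artifacts.example.com/app/app-{{ app_version }}.tar.gz"
            dest: "/tmp/app-{{ app_version }}.tar.gz"

        - name: Install (unpack) the package onto the environment
          ansible.builtin.unarchive:
            src: "/tmp/app-{{ app_version }}.tar.gz"
            dest: /opt/app
            remote_src: true

        - name: Restart the application service
          ansible.builtin.service:
            name: app               # hypothetical service name
            state: restarted

        - name: Execute an automated smoke test against the environment
          ansible.builtin.uri:
            url: http://localhost:8080/health    # hypothetical health endpoint
            status_code: 200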

Note

In a Continuous Delivery implementation, not every change automatically goes into production; instead, the principles of Continuous Delivery yield a software product that is releasable at any time. The idea is that the software could be pushed into production at any moment but isn't necessarily always pushed.

Generally, the CI/CD process flow would look like this:

[Diagrams: Continuous Integration; flow of Continuous Delivery; components of Continuous Delivery]

Modularity

Microservices and modularity are similar in nature but not entirely the same. The basic concept of modularity is to avoid creating a monolithic implementation of a software system. A monolithic software system is inadvertently developed in such a way that components are tightly coupled and rely heavily on one another, so much so that updating one component requires updating many others just to improve functionality or remove a defect.

Monolithic software development implementations are most common in legacy code bases that were poorly designed or rushed through the development phase. They can often result in brittle software functionality and force the business to continue to spend significant amounts of time updating and maintaining the code base.

On the other hand, a modular software system has a neatly encapsulated set of modules, which can be easily updated and maintained due to the lack of tightly coupled components. Each component in a modular software system provides a generally self-reliant piece of functionality and can be swapped out for a replacement in a much more efficient manner.
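
In Ansible terms, a loosely coupled component can be swapped by changing a single variable rather than rewriting the system around it. A minimal sketch, with illustrative role and variable names, follows:

    # data_tier.yml -- the database role is interchangeable; for example,
    # run with -e "db_role=postgresql" to swap the component at run time
    ---
    - name: Provision the data tier through an interchangeable role
      hosts: db_servers
      become: true
      tasks:
        - name: Apply whichever database role this deployment selects
          ansible.builtin.include_role:
            name: "{{ db_role | default('mysql') }}"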

Horizontal scalability

Horizontal scaling is an approach to software delivery that allows larger cloud-based organizations to spin up additional instances of a specific service in a given environment. Traffic incoming to this service is then load-balanced across the instances to provide consistent performance for the end user. Horizontal scalability must be addressed during the design and development phases of the SDLC and requires a level of discipline on the developer's part.
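
A minimal sketch of what this looks like in an Ansible inventory: scaling out amounts to adding hosts to the load-balanced group. The host names are illustrative.

    # inventory.yml -- the load balancer spreads traffic across whatever
    # instances the 'web' group currently contains
    all:
      children:
        loadbalancers:
          hosts:
            lb01.example.com:
        web:
          hosts:
            web01.example.com:
            web02.example.com:
            web03.example.com:   # add web04, web05, ... to scale out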

Blue-green deployments

Blue-green is a development and deployment concept requiring two copies of the product, one called blue and the other green, with one copy being the current release of the product. The other copy is in active development, to become the next release as soon as it is deemed fit for production. A further benefit of this development/deployment model is the ability to roll back to the previous release should the need arise. Blue-green deployments are vital to the concept of CI because, without the future release being developed in conjunction with the current release, hotfixes and fire/damage control become the norm, with innovation and overall focus suffering as a result.

Blue-green deployments specifically allow zero-downtime deployment to take place and for rollbacks to occur seamlessly (since the previous instance was never destroyed). Some very notable organizations have successfully implemented blue-green deployments. These companies include:

  • Netflix
  • Etsy
  • Facebook
  • Twitter
  • Amazon

As a result of blue-green deployments, there have been some very notable successes within the DevOps world, minimizing deployment risk and increasing the stability of software systems.
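
A hedged sketch of a blue-green cutover driven by Ansible: the load balancer is repointed from one color to the other, and flipping the variable back rolls the release back. The template, variable, and service names are assumptions for the example.

    # flip_live_color.yml -- switch production traffic between the two stacks
    ---
    - name: Point production traffic at the newly released color
      hosts: loadbalancers
      become: true
      vars:
        live_color: green          # previously 'blue'; flip back to roll back
      tasks:
        - name: Render the upstream configuration for the live color
          ansible.builtin.template:
            src: upstream.conf.j2  # hypothetical template using live_color
            dest: /etc/nginx/conf.d/upstream.conf
          notify: Reload nginx

      handlers:
        - name: Reload nginx
          ansible.builtin.service:
            name: nginx
            state: reloaded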

Artifact management and versioning

Artifact management plays a pivotal role in a DevOps environment. The artifact-management solution provides a single source of truth for all things deployable. In addition to that, it provides a way for the automation system to shrink-wrap a build or potential release candidate and ensure it doesn't get tampered with after the initial build. In many ways, an artifact-management system is to binaries what source control is to source code.

In the software industry, there are many options for artifact management. Some are free to use, and others require the purchase of a specific tool. Some of the more popular options include JFrog Artifactory, Sonatype Nexus, and Apache Archiva.

Now that we have a basic understanding of artifact management, let's take a look at how an artifact repository fits into the general workflow of a DevOps-oriented environment:

[Diagram: the artifact repository within a DevOps-oriented environment]
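
One way to see the shrink-wrap property in practice: a deployment fetches an exact, immutable version from the repository and verifies its checksum, so a binary that was tampered with after the initial build fails the pull. The URL and digest below are placeholders.

    # fetch_artifact.yml -- the artifact repository as the single source of truth
    ---
    - name: Retrieve a shrink-wrapped release candidate
      hosts: deploy_targets
      tasks:
        - name: Download the exact, immutable build by version
          ansible.builtin.get_url:
            url: https://artifacts.example.com/app/app-1.4.2.tar.gz
            dest: /tmp/app-1.4.2.tar.gz
            checksum: "sha256:d2c7..."   # placeholder digest recorded at build time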

Symmetrical environments

In a rapid-velocity deployment environment (where changes are pushed through a delivery pipeline rapidly), it is absolutely critical that all pre-production and production environments maintain a level of symmetry. That is to say, the deployment procedures and the resulting installation of a software system should be identical in every way possible across environments (a sketch of one way to structure this follows the list). For example, an organization may have the following environments:

  • Development: Here, developers can test their changes and integration tactics. This environment acts as a playground for all things development oriented and provides developers with an area to validate their code changes and test the resulting impact.
  • Quality assurance: This environment comes after the development environment and provides QA personnel with a location to test and validate the code and the resulting installation. It usually acts as a precursor to release, and a given build will need to pass stricter quality standards here before being signed off for release.
  • Stage: This environment represents the final location prior to production, where all automated deployment techniques are validated and tested.
  • Production: This environment represents the location where users/customers are actually working with the live install.
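
One common way to enforce this symmetry with Ansible is to run the identical playbook against every environment and isolate the differences in per-environment inventories and group variables. A sketch of that layout, with illustrative paths, follows:

    # Identical procedures everywhere; only inventory data differs.
    inventories/
      development/
        hosts.yml
        group_vars/all.yml    # environment-specific values live here
      qa/
        hosts.yml
        group_vars/all.yml
      stage/
        hosts.yml
        group_vars/all.yml
      production/
        hosts.yml
        group_vars/all.yml

    # The same playbook, unchanged, deploys to each environment:
    ansible-playbook -i inventories/stage/hosts.yml site.yml
    ansible-playbook -i inventories/production/hosts.yml site.yml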