Learning PowerShell DSC - Second Edition

By: James Pogran

Overview of this book

The main goal of this book is to teach you to configure, deploy, and manage your systems using the new features of PowerShell v5/v6 DSC. The book begins with the basics of PowerShell Desired State Configuration, covering its architecture and components, and familiarizes you with the set of Windows PowerShell language extensions and new Windows PowerShell commands that make up DSC. It then helps you create DSC custom resources and work with DSC configurations with the help of practical examples. Finally, it describes how to deploy configuration data using PowerShell DSC. Throughout this book, we focus on concepts such as building configurations with parameters, the Local Configuration Manager, and testing and restoring configurations using PowerShell DSC. By the end of the book, you will be able to deploy a real-world application end to end and will be familiar enough with the powerful Desired State Configuration platform to achieve continuous delivery and to manage and deploy configuration data for your systems efficiently and easily.

Why do we need configuration management?

Whether you manage a few servers or several thousand, the traditional methods of server and software installation and deployment are failing to address your current needs. These methods treat servers as special, singular entities that have to be protected and cared for, with special configurations that may or may not be documented; if they go down, they take the business with them.

For a long while, this worked out, but as the number of servers, applications, and configuration points grows, it becomes untenable to keep it all in your head or consistently documented by a set of people. New patches being released, feature sets changing, employee turnover, poorly documented software: all of these introduce variance and change into the system. If not accounted for and handled, these special servers become ticking time bombs that will explode the moment a detail is missed.

Written installation or configuration specifications that must be carried out by humans, error-free, time and time again on numerous servers are increasingly proving to be brittle and error-prone affairs. To further complicate things, despite the obvious interdependence of software development and other IT-related departments, software developers are often isolated from the realities faced by IT professionals during the deployment and maintenance of that software.

The answer to this is automation: defining a repeatable process that configures servers the right way every time. Servers move from being special snowflakes to being disposable numbers on a list that can be created and destroyed without requiring someone to remember the specific incantation that makes them work. Instead of a golden image that has to be kept up to date, with all the complexities of image storage and distribution, there is a set of steps that brings every server to compliance, regardless of whether it is a fresh installation or several years old.
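
To make this concrete, the following is a minimal sketch of what such a repeatable set of steps looks like when expressed as a PowerShell DSC configuration (DSC itself is covered in detail throughout this book). The feature, file, and path names here are illustrative placeholders rather than values taken from the text:

```powershell
# A minimal sketch of a repeatable "set of steps" expressed as a DSC configuration.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure the IIS role is installed.
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        # Ensure a default page exists with known content.
        File DefaultPage {
            DestinationPath = 'C:\inetpub\wwwroot\index.html'
            Contents        = '<h1>Configured by DSC</h1>'
            Ensure          = 'Present'
        }
    }
}

# Compiling the configuration produces a MOF document that can be applied to a
# fresh installation or to a server that has drifted, with the same result.
WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
```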

What is being described is Configuration Management (CM). CM ensures that the current design and build state of a system is a known good state. It also establishes trust: instead of relying on the knowledge of one person or a team of people, the state of the system is an objective truth that can be verified at any time. CM provides a historical record of what was changed as well, which is useful not only for reporting purposes (such as for management), but also for troubleshooting (this file used to be there, now it's not). CM detects variance between builds, so changes to the environment are both easily apparent and well known to all who work on the system.

This allows anyone to see the state of a system at any time and at any granularity, whether for a single server or across thousands. If a target system fails, it's a matter of re-running the CM build on a fresh installation to bring the system back to a steady state.
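
As a rough illustration of that re-run, the following sketch uses the built-in Test-DscConfiguration and Start-DscConfiguration cmdlets to detect drift and reapply a previously compiled configuration; it assumes PowerShell 5.0 or later and a MOF document at the illustrative path C:\DSC\WebServerBaseline from the earlier sketch:

```powershell
# Returns $true when the node matches the configuration documents in the path,
# and $false when it has drifted.
Test-DscConfiguration -Path 'C:\DSC\WebServerBaseline'

# Reapply the same configuration to bring the node back to the known good state.
Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose
```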

CM is part of a set of ideas called Infrastructure as Code. It requires that every step in provisioning an environment be automated and written down in files that can be run at any time to bring the environment to a known good state. While CM is infrastructure automation (replicating steps multiple times on any number of target nodes), Infrastructure as Code takes things one step further and codifies every step required to get an entire environment running. It captures the knowledge of server provisioning, server configuration, and server deployment in a format that is readable by sysadmins, developers, and other technical staff. Like CM, Infrastructure as Code uses existing best practices from software development, such as source control, automated code testing, and continuous integration, to ensure a reliable and repeatable process.
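
As a small, hedged illustration of applying those development practices to infrastructure code, the following Pester test (Pester being the common PowerShell testing framework) checks that a configuration script kept in source control still compiles to a MOF document; the script name and paths are hypothetical:

```powershell
# Validate that the configuration script still compiles before it is deployed.
Describe 'WebServerBaseline configuration' {
    It 'compiles to a MOF document' {
        # Dot-source the configuration script stored alongside this test.
        . "$PSScriptRoot\WebServerBaseline.ps1"

        # Compile into a temporary folder provided by Pester.
        $out = Join-Path $TestDrive 'mof'
        WebServerBaseline -OutputPath $out | Out-Null

        # A successful compilation produces one MOF per node.
        Join-Path $out 'localhost.mof' | Should -Exist
    }
}
```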

The approaches being described are not new; they are part of a larger movement, called DevOps, that has slowly gained acceptance among companies as the optimal way of managing servers and software.

What is DevOps?

The set of concepts we have been describing is collectively termed DevOps and is part of a larger process called continuous delivery. DevOps is a shortened form of development and operations, and it describes a close working relationship between the development of software and the deployment and operational use of that software. Continuous delivery is a set of practices that enables software to be developed and continuously deployed to production systems on a frequent basis, usually in an automated fashion, often multiple times a week or even a day.

Each year, a company called Puppet Labs surveys over 4,000 IT operations professionals and developers about their operations procedures. Of those surveyed, companies that have implemented DevOps practices have reported improved software deployment quality and more frequent software releases. Their report states that these companies shipped code 30 times faster and completed those deployments 8,000 times faster than their peers. They had 50% fewer failures and restored service 12 times faster than their peers.

Results such as those in the Puppet Labs survey show that organizations adopting DevOps are up to five times more likely to be high performing than those that have not. It's a cumulative effect: the longer you practice, the greater the results from adoption and the easier it is to keep improving. How DevOps enables this high performance centers on deployment frequency.

Defining and explaining the entirety of DevOps and continuous delivery is out of the scope of this book, so for our purposes the goals can be summarized as follows: improve deployment frequency, lower the failure rate of new releases, and shorten the recovery time when a new release is faulty. Even though the term implies that strict developer and operations roles are the only ones involved, the concept really applies to any person or department involved in the development, deployment, and maintenance of the product and the servers it runs on.

These goals work toward one end: minimizing the risk of software deployment by making changes safe through automation. The root cause of poor quality is variation, whether in the system, in software settings, or in the processes performing actions on the system or software. The solution to variation is repeatability. By figuring out how to perform an action repeatably, you remove the variation from the process and can continually make small changes to it without causing unforeseen problems.