
HashiCorp Packer in Production

By: John Boero

Overview of this book

Creating machine images can be time-consuming and error-prone when done manually. HashiCorp Packer enables you to automate this process by defining the configuration in a simple, declarative syntax. This configuration is then used to create machine images for multiple environments and cloud providers. The book begins by showing you how to create your first manifest while helping you understand the available components. You’ll then configure the most common built-in builder options for Packer and use runtime provisioners to reconfigure a source image for desired tasks. You’ll also learn how to control logging for troubleshooting errors in complex builds and explore options for monitoring multiple logs at once. As you advance, you’ll build on your initial manifest for a local application that’ll easily migrate to another builder or cloud. The chapters also help you get to grips with basic container image options in different formats while scaling large builds in production. Finally, you’ll develop a life cycle and retention policy for images, automate Packer builds, and protect your production environment from nefarious plugins. By the end of this book, you’ll be equipped to smooth collaboration and reduce the risk of errors by creating machine images consistently and automatically based on your defined configuration.
Table of Contents (18 chapters)
Part 1: Packer’s Beginnings
Part 2: Managing Large Environments
Part 3: Advanced Customized Packer

Consuming HCP Packer from Terraform

Terraform is HashiCorp’s tool for infrastructure as code. It is largely out of scope for this book, but we will at least cover how to use HCP Packer to deploy the images you have registered with HCP. The good news is that Packer’s data source type actually started out as a Terraform feature. The key capability in Terraform is simply being able to look up the IDs of the AMIs built with Packer:

data "hcp_packer_image_version" "gpu-test" {
  bucket_name = "al-gpu"
  channel     = "test"
}

If this looks familiar, it’s because the Packer data source and the Terraform data source use the same syntax and the same API behind the scenes. Now, as you update your release channels in HCP Packer, those changes will automatically be picked up by subsequent Terraform runs:

data "hcp_packer_image" "amiid" {
  bucket_name ...
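
To show how the lookup feeds an actual deployment, here is a minimal sketch of wiring the channel-driven AMI ID into an EC2 instance. The `cloud_provider` and `region` arguments, the `cloud_image_id` attribute, and the `aws_instance` settings are assumptions for illustration based on the HCP provider, not the book’s exact code:

# Look up the AMI registered under the "test" channel of the "al-gpu" bucket.
data "hcp_packer_image" "amiid" {
  bucket_name    = "al-gpu"
  channel        = "test"
  cloud_provider = "aws"
  region         = "us-east-1"   # assumed region for this sketch
}

resource "aws_instance" "gpu_test" {
  # cloud_image_id resolves to the AMI ID for whichever image
  # the channel currently points at in HCP Packer.
  ami           = data.hcp_packer_image.amiid.cloud_image_id
  instance_type = "t3.micro"     # assumed instance type
}

Because the data source is re-evaluated on every plan, promoting a new image to the channel in HCP Packer is all it takes for the next `terraform apply` to pick up the new AMI.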