Installing and configuring Puppet


Puppet uses a client-server paradigm. Clients (also called agents) are installed on all the systems to be managed, and the server (also called the Master) is installed on one or more central machines from where we control the whole infrastructure.

Puppet packages are available for most recent operating systems, either in the default repositories or in other repositories maintained by the distribution or its community (for example, EPEL for Red Hat derivatives).

Starting with Puppet version 4.0, Puppet Labs introduced Puppet Collections. These collections are repositories containing packages that are known to work well together. When using collections, all the nodes in the infrastructure should use the same one. As a general rule, Puppet agents are compatible with newer versions of the Puppet Master, but Puppet 4 breaks compatibility with previous versions.

To get the most appropriate packages for our infrastructure, we should use the Puppet Labs repositories.
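For example, the PC1 Puppet Collection repository can be enabled with commands similar to the following (the exact release package depends on our distribution and its version; here we assume Debian Jessie and an Enterprise Linux 7 system):

wget https://apt.puppetlabs.com/puppetlabs-release-pc1-jessie.deb
dpkg -i puppetlabs-release-pc1-jessie.deb # On Debian derivatives
apt-get update
rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm # On Red Hat derivatives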

The server package is called puppetserver, so to install it we can use these commands:

apt-get install puppetserver # On Debian derivatives
yum install puppetserver # On Red Hat derivatives

And similarly, to install the agent:

apt-get install puppet-agent # On Debian derivatives
yum install puppet-agent # On Red Hat derivatives

Note

To install Puppet on other operating systems, check out http://docs.puppetlabs.com/guides/installation.html.

In versions before 4.0, the agent package was called puppet, and the server package was called puppetmaster on Debian-based distributions and puppet-server on Red Hat-derived distributions.
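For reference, on those older versions the equivalent installation commands would have looked like this (using the package names just mentioned):

apt-get install puppetmaster # Server on Debian derivatives (pre-4.0)
yum install puppet-server # Server on Red Hat derivatives (pre-4.0)
apt-get install puppet # Agent on Debian derivatives (pre-4.0)
yum install puppet # Agent on Red Hat derivatives (pre-4.0)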

This is enough to start the services with the default configuration, but the commands are not installed in any of the usual standard paths for binaries: they can be found under /opt/puppetlabs/puppet/bin. We need to add this directory to our PATH environment variable if we want to use the commands without typing their full path.
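For example, assuming a Bash shell, we can add the directory to the PATH for the current session and persist the change for future logins:

export PATH=$PATH:/opt/puppetlabs/puppet/bin # Current session only
echo 'export PATH=$PATH:/opt/puppetlabs/puppet/bin' >> ~/.bashrc # Future Bash sessions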

In versions before 4.0, configuration files are placed in /etc/puppet; from 4.0 onwards, they are placed in /etc/puppetlabs/, together with the configuration of other Puppet Labs utilities such as MCollective. Inside the puppetlabs directory, we can find a puppet directory that contains the puppet.conf file. This file is used by both agents and the server, and includes the parameters for some directories used at runtime as well as agent-specific information, for example, the server to contact. The file is divided into [sections] and has an INI-like format. Here is its content just after installation:

[master]
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code

[agent]
server = puppet

A very useful command to see all the current client configuration settings is:

puppet config print all
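The same command can also print just the settings we are interested in, optionally limited to a given section; for example:

puppet config print server certname --section agent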

The server has additional configuration in the puppetserver/conf.d directory (under /etc/puppetlabs), in files written in the HOCON format, a human-readable variant of JSON. Some of these files are as follows:

  • global.conf: This contains global settings; their defaults are usually fine

  • webserver.conf: This contains webserver settings, such as the port and the listening address (see the sketch after this list)

  • puppetserver.conf: This contains the settings used by the server

  • ca.conf: This contains the settings for the Certificate Authority service
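As a sketch of the HOCON syntax, a minimal webserver.conf could look like the following (these keys and values are just an illustration based on common defaults; the file shipped with our puppetserver version may differ):

webserver: {
    ssl-host: 0.0.0.0
    ssl-port: 8140
}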

The settings in these files sometimes refer to other files and directories, which are also important to know about when we work with Puppet:

  • Logs: They are in /var/log/puppetlabs (but also in the normal syslog files, with the daemon facility), both for agents and servers

  • Puppet operational data: This is placed in /opt/puppetlabs/server/data/puppetserver

  • SSL certificates: They are stored in /opt/puppetlabs/puppet/ssl. By default, the agent tries to contact a server whose hostname is puppet, so we should either name our server puppet.$domain or provide the correct name in the server parameter.

  • Agent certificate name: When the agent communicates with the server, it presents itself with its certname (which is also the hostname placed in its SSL certificate). By default, the certname is the fully qualified domain name (FQDN) of the agent's system.

  • The catalog: This is the configuration fetched by the agent from the server. By default, the agent daemon requests it every 30 minutes. Puppet code is placed, by default, under /etc/puppetlabs/code.

  • SSL certificate requests: On the Master, we have to sign each client's certificate request (manually, by default; see the example at the end of this section). If we can cope with the relevant security concerns, we may automatically sign them by adding their FQDNs (or rules matching them) to the autosign.conf file, for example, to automatically sign the certificates for node.example.com and for all nodes whose hostname is a subdomain of servers.example.com:

    node.example.com
    *.servers.example.com

However, we have to take into account that any host can request a configuration presenting any FQDN, so this is potentially a security flaw.
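For reference, a manual signing session on the Master could look like this (node.example.com is the example node used above; the agent run at the end fetches and applies the catalog once its certificate has been signed):

puppet cert list # On the Master: show pending certificate requests
puppet cert sign node.example.com # On the Master: sign a specific request
puppet agent --test # On the agent: trigger an immediate run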