OpenStack for Architects

By: Michael Solberg, Benjamin Silverman

Overview of this book

Over the last five years, hundreds of organizations have successfully implemented Infrastructure as a Service (IaaS) platforms based on OpenStack. The huge amount of investment from these organizations, industry giants such as IBM and HP, as well as open source leaders such as Red Hat have led analysts to label OpenStack as the most important open source technology since the Linux operating system. Because of its ambitious scope, OpenStack is a complex and fast-evolving open source project that requires a diverse skill-set to design and implement it. This guide leads you through each of the major decision points that you'll face while architecting an OpenStack private cloud for your organization. At each point, we offer you advice based on the experience we've gained from designing and leading successful OpenStack projects in a wide range of industries. Each chapter also includes lab material that gives you a chance to install and configure the technologies used to build production-quality OpenStack clouds. Most importantly, we focus on ensuring that your OpenStack project meets the needs of your organization, which will guarantee a successful rollout.

Your first OpenStack deployment


In our experience, almost all organizations approach OpenStack with the following three steps:

  1. An individual, usually a Linux or Cloud Architect, installs OpenStack on a single machine to verify that the software can be deployed without too much effort.

  2. The Architect enlists the help of other team members, typically Network and Storage Architects or Engineers, to deploy a multiple-node installation. This installation will leverage some kind of shared ephemeral or block storage.

  3. A team of Architects or Engineers crafts the first deployment of OpenStack, customized for the organization's use cases or environmental concerns. Professional services from a company such as Red Hat, Mirantis, HP, IBM, Canonical, or Rackspace are often engaged at this point in the process.

From here on out, it's off to the races. We'll follow a similar pattern in this book. In this first chapter, we'll start with the first step: the "all-in-one" deployment.

Writing the initial deployment plan

Taking the time to document the very first deployment might seem a bit obsessive, but it provides us with the opportunity to begin iterating on the documentation that is the key to successful OpenStack deployments. We'll start with the following template.

Hardware

The initial deployment of OpenStack will leverage a single commodity server, an HP DL380.

Hostname    Model    CPU cores    Memory    Disk      Network
openstack   DL380    16           256 GB    500 GB    2 x 10 GbE

This deployment provides compute capacity for 60 m1.medium instances or 30 m1.large instances.

Change the specifications in the table to match your deployment. It's important to specify the expected capacity in the deployment document. As a basic rule of thumb, divide the available system memory by the memory of the instance flavor you expect to use. We'll talk more about accurately forecasting capacity in a later chapter.
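For example, with the hardware in the preceding table and the default flavor sizes (4 GB of memory for m1.medium and 8 GB for m1.large), the arithmetic works out as follows. The 16 GB set aside for the host itself is an assumption and should be adjusted for your environment:

# echo $(( (256 - 16) / 4 ))    # roughly 60 m1.medium instances
# echo $(( (256 - 16) / 8 ))    # roughly 30 m1.large instances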

Network addressing

There is one physical provider network in this deployment. Software-defined networking (SDN) is provided in the tenant space by Neutron with the Open vSwitch (OVS) ML2 plugin.

Hostname     MAC                  IP
openstack    3C:97:0E:BF:6C:78    192.168.0.10

Change the network addresses in this section to match your deployment. We'll only use a single network interface for the all-in-one installation.
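To record the correct values for your own host, you can read the MAC and IP address directly from the interface that will carry the provider network. The interface name eth0 here is only an example; substitute your own:

# ip addr show eth0 | grep -E 'link/ether|inet '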

Configuration notes

This deployment will use the RDO all-in-one reference architecture. This reference architecture uses a minimum amount of hardware as the basis for a monolithic installation of OpenStack, typically only used for testing or experimentation. For more information on the all-in-one deployment, refer to https://www.rdoproject.org/Quickstart.

For the first deployment, we'll just use the RDO distribution out of the box. In later chapters, we'll begin to customize our deployment and we'll add notes to this section to describe where we've diverged from the reference architecture.

Requirements

The host system will need to meet the following requirements prior to deployment:

  • Red Hat Enterprise Linux 7 (or CentOS 7)

  • The NetworkManager service must be disabled

  • Network interfaces must be configured in /etc/sysconfig/network-scripts as per the Network addressing section (see the sketch after this list)

  • The RDO OpenStack repository must be enabled (from https://rdoproject.org/)
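As an illustration of the interface requirement, a static configuration for a single interface might look like the following. This is a sketch only; the interface name, prefix, and gateway are assumptions that must match your environment:

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.0.10
PREFIX=24
GATEWAY=192.168.0.1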

To enable the RDO repository, run the following command as the root user on your system:

yum install -y https://rdoproject.org/repos/rdo-release.rpm

Installing OpenStack

Assuming that we've correctly configured our host machine as per our deployment plan, the actual deployment of OpenStack is relatively straightforward. The installation instructions can either be captured in an additional section of the deployment plan or in a separate document, the Installation Guide. Either way, the installation instructions should be immediately followed by a set of tests that can be run to verify that the deployment went correctly.

Installation instructions

To install OpenStack, execute the following command as the root user on the system designated in the deployment plan:

# yum install -y openstack-packstack

This command will install the packstack installation utility on the machine. If this command fails, ensure that the RDO repository is correctly enabled using the following command:

# rpm -q rdo-release

If the RDO repository has not been enabled, enable it using the following command:

# yum install -y https://rdoproject.org/repos/rdo-release.rpm

Next, run the packstack utility to install OpenStack:

# packstack --allinone

The packstack utility configures and applies a set of Puppet manifests to your system to install and configure the OpenStack distribution. The --allinone option instructs packstack to configure the set of services defined in the RDO reference architecture.
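If you want the installation to be repeatable, packstack can also write its configuration to an answer file that you review, edit, and feed back in. This is optional for the all-in-one install, and the file path here is only an example:

# packstack --gen-answer-file=/root/answers.txt
# packstack --answer-file=/root/answers.txt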

Verifying the installation

Once the installation has completed successfully, use the following steps to verify the installation.

First, verify the Keystone identity service by attempting to get an authorization token. The OpenStack command-line client uses a set of environment variables to authenticate your session. Two configuration files which set those variables will be created by the packstack installation utility.

The keystonerc_admin file can be used to authenticate an administrative user and the keystonerc_demo file can be used to authenticate a nonprivileged user. An example keystonerc is shown as follows:

export OS_USERNAME=demo 
export OS_TENANT_NAME=demo 
export OS_PASSWORD=<random string> 
export OS_AUTH_URL=http://192.168.0.10:5000/v2.0/ 
export PS1='[\u@\h \W(keystone_demo)]\$ ' 

This file will be used to populate your command-line session with the necessary environment variables and credentials that will allow you to communicate with the OpenStack APIs that use the Keystone service for authentication.

In order to use the keystonerc file to load your credentials, source its contents into your shell session from the directory in which you ran the packstack command. Sourcing the file produces no output other than a change to the shell prompt:

# . ./keystonerc_demo

Your command prompt will change to remind you that you're using the sourced OpenStack credentials.

In order to load these credentials, the preceding source command must be run every time a user logs in. These credentials are not persistent. If you do not source your credentials before running OpenStack commands, you will most likely get the following error:

You must provide a username via either --os-username or env[OS_USERNAME]
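A quick way to confirm whether credentials are loaded in your current shell is to look for the OS_* environment variables; this is simply a convenience check:

# env | grep '^OS_'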

To verify the Keystone service, run the following command to get a Keystone token:

# openstack token issue

The output of this command should be a table similar to the following one:

+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-07-14T05:01:41Z       |
|     id    | a20264cd091847ac965cde8cbba7b0b9 |
| tenant_id | 202bd2fa2a3a40639bb0bccc9a57e37d |
|  user_id  | 68d90544e0064c4c838d47d80811b895 |
+-----------+----------------------------------+

Next, verify the Glance image service:

# openstack image list

This should output a table listing a single image, the CirrOS image that is installed with the packstack command. We'll use the ID of that glance image to verify the Nova Compute service. Before we do that, we'll verify the Neutron Network service:

# openstack network list

This should output a table listing a network available to use for testing. We'll use the ID of that network to verify the Nova Compute service with the following commands:

First, add root's SSH public key to OpenStack as a keypair named demo:

# openstack keypair create --public-key ~/.ssh/id_rsa.pub demo
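If the root user does not yet have an SSH keypair, generate one before running the keypair create command. The empty passphrase here is an assumption made only for convenience in a test environment:

# ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa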

Now, create an instance called instance01:

# openstack server create --flavor m1.tiny \
--image <image_id> \
--key-name demo \
--nic net-id=<networkid> \
instance01

This command will create the instance and output a table of information about the instance that you've just created. To check the status of the instance as it is provisioned, use the following command:

# openstack server show instance01

When the status becomes ACTIVE, the instance has successfully launched. The keypair created with the openstack keypair create command (demo) can be used to log in to the instance once it's running.
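To log in, look up the instance's address and connect as the image's default user (cirros for the CirrOS image). This is a sketch only; depending on your network layout, you may first need to assign a floating IP or run the SSH command from the appropriate network namespace:

# openstack server show instance01 -f value -c addresses
# ssh -i ~/.ssh/id_rsa cirros@<instance_ip>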

Next steps

At this point, you should have a working OpenStack installation on a single machine. To familiarize yourself with the OpenStack Horizon user interface, see the documentation on the RDO project website at https://www.rdoproject.org/Running_an_instance.