Ceph Cookbook

Overview of this book

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability. This cutting-edge technology has been transforming the storage industry, and it is evolving rapidly as a leader in the software-defined storage space, with full support for cloud platforms such as OpenStack and CloudStack as well as for virtualization platforms. It is the most popular storage backend for OpenStack public and private clouds, making it a first choice for storage solutions. Ceph is backed by Red Hat and is developed by a thriving open source community of individual developers as well as several companies across the globe. This book takes you from a basic knowledge of Ceph to an expert understanding of its most advanced features, walking you through building a production-grade Ceph storage cluster and helping you develop all the skills you need to plan, deploy, and effectively manage it. Beginning with the basics, you’ll create a Ceph cluster, followed by block, object, and file storage provisioning. Next, you’ll get a step-by-step tutorial on integrating it with OpenStack and building a Dropbox-like object storage solution. We’ll also take a look at federated architecture and CephFS, and you’ll dive into Calamari and VSM for monitoring the Ceph environment. You’ll develop expert knowledge on troubleshooting and benchmarking your Ceph storage cluster. Finally, you’ll get to grips with the best practices for operating Ceph in a production environment.

Installing and configuring Ceph


To deploy our first Ceph cluster, we will use the ceph-deploy tool to install and configure Ceph on all three virtual machines. The ceph-deploy tool is part of the Ceph software-defined storage suite; it simplifies the deployment and management of a Ceph storage cluster. In the previous section, we created three virtual machines running CentOS 7, which have Internet connectivity over NAT as well as private host-only networks.

We will configure these machines as a Ceph storage cluster, as shown in the following diagram:

Creating a Ceph cluster on ceph-node1

We will first install Ceph and configure ceph-node1 as a Ceph monitor and OSD node. Later recipes in this chapter will introduce ceph-node2 and ceph-node3.

How to do it…

  1. Install ceph-deploy on ceph-node1:

    # yum install ceph-deploy -y
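
    To confirm that the tool is installed, you can print its version (the exact version reported depends on the repository you installed from):

    # ceph-deploy --version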
    
  2. Next, we will create a Ceph cluster using ceph-deploy by executing the following command from ceph-node1:

    # mkdir /etc/ceph ; cd /etc/ceph
    # ceph-deploy new ceph-node1
    

    The new subcommand of ceph-deploy creates a new cluster with ceph as the default cluster name, and it generates the cluster configuration and keyring files. List the present working directory and you will find the ceph.conf and ceph.mon.keyring files:
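
    For example (assuming the previous commands were run from /etc/ceph as shown above; ceph-deploy may also leave a log file in this directory):

    # ls /etc/ceph
    ceph.conf  ceph.mon.keyring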

  3. To install Ceph software binaries on all the machines using ceph-deploy, execute the following command from ceph-node1:

    # ceph-deploy install ceph-node1 ceph-node2 ceph-node3
    

    The ceph-deploy tool will first install all the dependencies, followed by the Ceph Giant binaries. Once the command completes successfully, check the Ceph version and the Ceph health on all the nodes, as follows:

    # ceph -v
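
    To run the check on all three nodes in one go, you can loop over them with SSH (this relies on the passwordless SSH access to the nodes that ceph-deploy itself requires):

    # for node in ceph-node1 ceph-node2 ceph-node3 ; do ssh $node ceph -v ; done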
    
  4. Create the first Ceph monitor on ceph-node1:

    # ceph-deploy mon create-initial
    

    Once the monitor creation is successful, check your cluster status; the cluster will not be healthy at this stage because it does not yet have any OSDs:

    # ceph -s
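
    For a one-line summary of the cluster health, you can also run:

    # ceph health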
    
  5. Create OSDs on ceph-node1:

    1. List the available disks on ceph-node1:

      # ceph-deploy disk list ceph-node1
      

      From the output, carefully select the disks (other than the OS partition) on which we should create the Ceph OSDs. In our case, the disk names are sdb, sdc, and sdd.
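
      If you want to double-check the device names before touching the disks, lsblk (a standard Linux utility) lists all block devices together with their sizes and mount points, which makes the OS disk easy to spot:

      # lsblk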

    2. The disk zap subcommand destroys the existing partition table and all content on the disk. Before running this command, make sure that you are using the correct disk device name:

      # ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
      
    3. The osd create subcommand will first prepare the disk, that is, erase it and create a filesystem (xfs by default), and then activate the disk's first partition as the data partition and its second partition as the journal:

      # ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
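
      By default, this co-locates the journal on the second partition of the same disk. If a faster device is available for journals, ceph-deploy also accepts an optional third field in the form node:disk:journal; the device /dev/ssd1 below is only an illustrative placeholder:

      # ceph-deploy osd create ceph-node1:sdb:/dev/ssd1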
      
    4. Check the Ceph status and note the OSD count. At this stage, your cluster will not be healthy; we need to add a few more nodes to the Ceph cluster so that it can replicate objects three times (by default) across the cluster and attain a healthy status. You will find more information on this in the next recipe:

      # ceph -s
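
      To see the individual OSDs, the hosts they live on, and their up/down status, you can also print the CRUSH tree:

      # ceph osd tree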