Ceph Cookbook - Second Edition

By : Vikhyat Umrao, Karan Singh, Michael Hackett

Overview of this book

Ceph is a unified distributed storage system designed for reliability and scalability. It has been transforming the software-defined storage industry and is rapidly evolving as a leader, with wide-ranging support for popular cloud platforms such as OpenStack and CloudStack, as well as for virtualized platforms. Ceph is backed by Red Hat and developed by a community of developers that has gained immense traction in recent years. This book will guide you from the basics of Ceph, such as creating block, object, and filesystem storage, to advanced concepts such as cloud integration solutions. It also covers practical, easy-to-implement recipes on CephFS, RGW, and RBD with respect to the major stable Ceph release, Jewel. Toward the end of the book, recipes on troubleshooting and best practices will help you get to grips with managing Ceph storage in a production environment. By the end of this book, you will have practical, hands-on experience of using Ceph efficiently for your storage requirements.

Installing and configuring Ceph

To deploy our first Ceph cluster, we will use the ceph-ansible tool to install and configure Ceph on all three virtual machines. The ceph-ansible tool is part of the Ceph project and is used for easy deployment and management of your Ceph storage cluster. In the previous section, we created three virtual machines with CentOS 7, which have connectivity to the internet over NAT, as well as private host-only networks.

We will configure these machines as a Ceph storage cluster, as shown in the following diagram:

Creating the Ceph cluster on ceph-node1

We will first install Ceph and configure ceph-node1 as the Ceph monitor and the Ceph OSD node. Later recipes in this chapter will introduce ceph-node2 and ceph-node3.

How to do it...

Copy the ceph-ansible package to ceph-node1 from the Ceph-Cookbook-Second-Edition directory.

  1. Use vagrant as the password for the root user:
      # cd Ceph-Cookbook-Second-Edition
      # scp ceph-ansible-2.2.10-38.g7ef908a.el7.noarch.rpm root@ceph-node1:/root
  1. Log in to ceph-node1 and install ceph-ansible:
      [root@ceph-node1 ~]# yum install ceph-ansible-2.2.10-38.g7ef908a.el7.noarch.rpm -y
  1. Add the Ceph hosts to /etc/ansible/hosts:
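      A minimal inventory for this single-node stage might look like the following sketch; [mons] and [osds] are the group names ceph-ansible expects, and the hostname is the node created in the previous section:

      [mons]
      ceph-node1

      [osds]
      ceph-node1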
  1. Verify that Ansible can reach the Ceph hosts mentioned in /etc/ansible/hosts:
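      One way to verify connectivity is Ansible's ping module; every host in the inventory should report SUCCESS with a pong reply:

      [root@ceph-node1 ~]# ansible all -m ping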
  1. Create a directory under the root home directory so Ceph Ansible can use it for storing the keys:
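      A sketch of this step; the directory name ceph-ansible-keys is an assumption, but whatever name you choose must match the fetch_directory option set later in all.yml:

      [root@ceph-node1 ~]# mkdir ceph-ansible-keys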
  1. Create a symbolic link to the Ansible group_vars directory in the /etc/ansible/ directory:
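      Assuming the ceph-ansible RPM installed its sample variable files under /usr/share/ceph-ansible/, the link can be created as follows:

      [root@ceph-node1 ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars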
  1. Go to /etc/ansible/group_vars, copy all.yml.sample to all.yml, and open it to define values for the configuration options:
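      For example:

      [root@ceph-node1 ~]# cd /etc/ansible/group_vars
      [root@ceph-node1 group_vars]# cp all.yml.sample all.yml
      [root@ceph-node1 group_vars]# vim all.yml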
  1. Define the following configuration options in all.yml for the latest jewel version on CentOS 7:
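      A sketch of the relevant options, assuming the keys directory created earlier; the network CIDR and interface name here are illustrative values for the host-only network and must match your own setup:

      fetch_directory: /root/ceph-ansible-keys
      ceph_stable: true
      ceph_stable_release: jewel
      public_network: 192.168.1.0/24
      monitor_interface: enp0s8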
  1. Go to /etc/ansible/group_vars, copy osds.yml.sample to osds.yml, and open it to define values for the configuration options:
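      For example:

      [root@ceph-node1 group_vars]# cp osds.yml.sample osds.yml
      [root@ceph-node1 group_vars]# vim osds.yml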
  1. Define the following configuration options in osds.yml for the OSD disks; we are co-locating each OSD journal on its OSD data disk:
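      A sketch of the relevant options; the device names are assumptions for the three extra disks attached to each VM, and journal_collocation places the journal on the same disk as the OSD data:

      devices:
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd
      journal_collocation: true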
  1. Go to /usr/share/ceph-ansible and add the retry_files_save_path option to the [defaults] section of ansible.cfg:
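      For example, saving retry files under the root home directory (the path itself is a matter of preference):

      [defaults]
      retry_files_save_path = ~/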
  1. Run the Ansible playbook to deploy the Ceph cluster on ceph-node1:

To run the playbook, you need site.yml, which is present in the same path: /usr/share/ceph-ansible/. From that directory, run the following commands:

      # cp site.yml.sample site.yml
      # ansible-playbook site.yml

Once the playbook completes the Ceph cluster installation and the play recap reports failed=0, ceph-ansible has successfully deployed the Ceph cluster, as shown in the following screenshot:

You now have all three OSD daemons and one monitor daemon up and running on ceph-node1.

You can verify the installed Ceph version by running the ceph -v command:
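      The exact build hash in the output will vary, but any 10.2.x version number corresponds to the jewel release:

      [root@ceph-node1 ~]# ceph -v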