Ceph: Designing and Implementing Scalable Storage Systems

By: Michael Hackett, Vikhyat Umrao, Karan Singh, Nick Fisk, Anthony D'Atri, Vaibhav Bhembre

Overview of this book

This Learning Path takes you through the basics of Ceph all the way to an in-depth understanding of its advanced features. You'll gain the skills to plan, deploy, and manage your Ceph cluster. After an introduction to the Ceph architecture and its core projects, you'll set up a Ceph cluster and learn how to monitor its health, improve its performance, and troubleshoot issues. By following the step-by-step approach of this Learning Path, you'll learn how Ceph integrates with OpenStack, Glance, Manila, Swift, and Cinder. With knowledge of federated architecture and CephFS, you'll use Calamari and VSM to monitor the Ceph environment. In the later chapters, you'll study the key areas of Ceph, including BlueStore, erasure coding, and cache tiering, and discover what they can do for your storage system. In the concluding chapters, you will develop applications that use Librados and distributed computations with shared object classes, and see how Ceph and its supporting infrastructure can be optimized. By the end of this Learning Path, you'll have the practical knowledge to operate Ceph in a production environment.

This Learning Path includes content from the following Packt products:

• Ceph Cookbook by Michael Hackett, Vikhyat Umrao, and Karan Singh
• Mastering Ceph by Nick Fisk
• Learning Ceph, Second Edition by Anthony D'Atri, Vaibhav Bhembre, and Karan Singh

Installing and configuring Ceph


To deploy our first Ceph cluster, we will use the ceph-ansible tool to install and configure Ceph on all three virtual machines. ceph-ansible is part of the Ceph project and is used for easy deployment and management of a Ceph storage cluster. In the previous section, we created three virtual machines with CentOS 7, which have connectivity to the internet over NAT as well as private host-only networks.

We will configure these machines as a Ceph storage cluster, as shown in the following diagram:

Creating the Ceph cluster on ceph-node1

We will first install Ceph and configure ceph-node1 as the Ceph monitor and the Ceph OSD node. Later recipes in this chapter will introduce ceph-node2 and ceph-node3.

How to do it...

  1. Copy the ceph-ansible package to ceph-node1 from the Ceph-Designing-and-Implementing-Scalable-Storage-Systems directory, using vagrant as the password for the root user:
      # cd Ceph-Designing-and-Implementing-Scalable-Storage-Systems
      # scp ceph-ansible-2.2.10-38.g7ef908a.el7.noarch.rpm root@ceph-node1:/root
  2. Log in to ceph-node1 and install ceph-ansible:
      [root@ceph-node1 ~]# yum install ceph-ansible-2.2.10-38.g7ef908a.el7.noarch.rpm -y
  3. Add the Ceph hosts to /etc/ansible/hosts:
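     Since ceph-node1 hosts both the monitor and the OSDs at this stage, a minimal inventory might look like the following; the group names are the ones ceph-ansible expects, and later recipes add ceph-node2 and ceph-node3 here:
      [mons]
      ceph-node1

      [osds]
      ceph-node1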
  4. Verify that Ansible can reach the Ceph hosts mentioned in /etc/ansible/hosts:
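     Ansible's built-in ping module is a quick way to confirm SSH connectivity to every host in the inventory; a successful check looks like this:
      [root@ceph-node1 ~]# ansible all -m ping
      ceph-node1 | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }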
  5. Create a directory under the root home directory so Ceph Ansible can use it for storing the keys:
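     The directory name here is only an example; any writable path works, provided it matches the fetch_directory value set in all.yml later:
      [root@ceph-node1 ~]# mkdir ~/ceph-ansible-keys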
  6. Create a symbolic link to the Ansible group_vars directory in the /etc/ansible/ directory:
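     Assuming the RPM placed its files under /usr/share/ceph-ansible, the link can be created as follows:
      [root@ceph-node1 ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars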
  7. Go to /etc/ansible/group_vars, copy all.yml.sample to all.yml, and open it to define the configuration options' values:
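     For example (any editor will do; vim is used here):
      [root@ceph-node1 ~]# cd /etc/ansible/group_vars
      [root@ceph-node1 group_vars]# cp all.yml.sample all.yml
      [root@ceph-node1 group_vars]# vim all.yml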
  8. Define the following configuration options in all.yml for the latest Jewel version on CentOS 7:
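     A minimal sketch of the relevant all.yml settings; the network CIDR and interface name below are assumptions based on the host-only network from the previous section, so adjust them to your environment:
      fetch_directory: /root/ceph-ansible-keys
      ceph_stable: true
      ceph_stable_release: jewel
      public_network: 192.168.1.0/24
      monitor_interface: enp0s8
      journal_size: 1024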
  9. Still in /etc/ansible/group_vars, copy osds.yml.sample to osds.yml and open it to define the configuration options' values:
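     For example:
      [root@ceph-node1 group_vars]# cp osds.yml.sample osds.yml
      [root@ceph-node1 group_vars]# vim osds.yml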
  10. Define the following configuration options in osds.yml for the OSD disks; we are co-locating the OSD journal on the OSD data disk:
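     A sketch assuming each VM carries three spare disks at /dev/sdb through /dev/sdd (device names will differ on other systems); with journal_collocation enabled, ceph-ansible places each journal on the same device as its OSD data:
      devices:
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd
      journal_collocation: true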
  11. Go to /usr/share/ceph-ansible and add the retry_files_save_path option to ansible.cfg under the [defaults] section:
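     Any writable location works for Ansible's .retry files; the root home directory is used here as an example:
      [defaults]
      retry_files_save_path = ~/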
  12. Run the Ansible playbook to deploy the Ceph cluster on ceph-node1:

To run the playbook, you need site.yml, which is present in the same path: /usr/share/ceph-ansible/. From that directory, run the following commands:

      # cp site.yml.sample site.yml
      # ansible-playbook site.yml

Once the playbook completes the Ceph cluster installation job and the play recap reports failed=0, ceph-ansible has successfully deployed the Ceph cluster.
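
The exact task counts vary from run to run; what matters is that the recap reports unreachable=0 and failed=0, along these lines:

      PLAY RECAP *********************************************************
      ceph-node1                 : ok=...  changed=...  unreachable=0    failed=0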

You now have three OSD daemons and one monitor daemon up and running on ceph-node1.
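
You can verify this with the cluster status command; on a healthy single-node cluster, the osdmap line of the output should report 3 osds: 3 up, 3 in:

      [root@ceph-node1 ~]# ceph -s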

You can check the installed Ceph version, and confirm that it is a Jewel release, with the ceph -v command.
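
Jewel releases carry 10.2.x version numbers; expect output along these lines, with the exact point release and build hash depending on the installed package:

      [root@ceph-node1 ~]# ceph -v
      ceph version 10.2.x (...)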