Scaling up your Ceph cluster


At this point, we have a running Ceph cluster with one MON and three OSDs configured on ceph-node1. Now, we will scale up the cluster by adding ceph-node2 and ceph-node3 as MON and OSD nodes.

How to do it…

A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on an odd number of monitors greater than one, for example, 3 or 5, to form a quorum, and it uses the Paxos algorithm to maintain the quorum majority. Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster:

  1. Add a public network address to the /etc/ceph/ceph.conf file on ceph-node1:

    public network = 192.168.1.0/24
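
    For orientation, the relevant part of ceph.conf might now look roughly like the following sketch; the fsid and monitor address shown are placeholders, and the other monitor-related lines will depend on what ceph-deploy generated for your cluster:

    [global]
    fsid = <cluster fsid generated by ceph-deploy>
    mon initial members = ceph-node1
    mon host = <IP address of ceph-node1>
    public network = 192.168.1.0/24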
    
  2. From ceph-node1, use ceph-deploy to create a monitor on ceph-node2:

    # ceph-deploy mon create ceph-node2
    
  3. Repeat this step to create a monitor on ceph-node3:

    # ceph-deploy mon create ceph-node3
    
  4. Check the status of your Ceph cluster; it should show three monitors in the MON section:

    # ceph -s
    # ceph mon stat
    

    You will notice that your Ceph cluster is currently showing HEALTH_WARN; this is because we have not configured any OSDs other than the ones on ceph-node1. By default, data in a Ceph cluster is replicated three times, across three different OSDs hosted on three different nodes. Now, we will configure OSDs on ceph-node2 and ceph-node3:
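
    If you want to confirm the replication factor of a pool yourself, you can query it directly; for example, for the default rbd pool:

    # ceph osd pool get rbd size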

  5. Use ceph-deploy from ceph-node1 to perform a disk list, disk zap, and OSD creation on ceph-node2 and ceph-node3:

    # ceph-deploy disk list ceph-node2 ceph-node3
    # ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
    # ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
    # ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
    # ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
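
    Before moving on, it is worth confirming that all nine OSDs are up and in; a quick check along these lines works:

    # ceph osd tree
    # ceph osd stat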
    
  6. Since we have added more OSDs, we should tune the pg_num and pgp_num values of the rbd pool to achieve a HEALTH_OK status for our Ceph cluster:

    # ceph osd pool set rbd pg_num 256
    # ceph osd pool set rbd pgp_num 256
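
    The value 256 follows a commonly used rule of thumb: pg_num is roughly (number of OSDs x 100) / replica count, rounded to a power of two. With 9 OSDs and a replica count of 3, this gives 300, which this recipe rounds to 256. You can verify the new values with:

    # ceph osd pool get rbd pg_num
    # ceph osd pool get rbd pgp_num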
    

    Tip

    Starting with the Ceph Hammer release, rbd is the only pool created by default. Ceph versions before Hammer create three default pools: data, metadata, and rbd.
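
    To see which pools exist on your cluster, you can list them:

    # ceph osd lspools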

  7. Check the status of your Ceph cluster; at this stage, it should report HEALTH_OK.
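
    For example, you can re-run the status commands from step 4:

    # ceph -s
    # ceph mon stat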