Mastering Ceph

By: Nick Fisk

Overview of this book

Mastering Ceph covers everything you need to know to use Ceph effectively. Starting with the design goals and planning steps that should be undertaken to ensure successful deployments, you will be guided through setting up and deploying the Ceph cluster with the help of orchestration tools. Key areas of Ceph, including BlueStore, erasure coding, and cache tiering, are covered with the help of examples. Developing applications that use librados, and distributed computation with shared object classes, are also covered. A section on tuning takes you through the process of optimising both Ceph and its supporting infrastructure. Finally, you will learn to troubleshoot issues and handle scenarios where Ceph is unlikely to recover on its own. By the end of the book, you will be able to successfully deploy and operate a resilient, high-performance Ceph cluster.
Table of Contents (12 chapters)

How to use BlueStore

To create a BlueStore OSD, you can use ceph-disk, which fully supports creating BlueStore OSDs with the RocksDB data and WAL either collocated with the data or stored on separate disks. The operation is similar to creating a filestore OSD, except that instead of specifying a device to use as the filestore journal, you specify devices for the RocksDB data. As previously mentioned, you can separate the DB and WAL parts of RocksDB if you wish:

ceph-disk prepare --bluestore /dev/sda --block.wal /dev/sdb --block.db /dev/sdb

The preceding command assumes that your data disk, /dev/sda, is a spinning disk, and that you have a faster device, such as an SSD, as /dev/sdb. ceph-disk would create two partitions on the data disk: one for storing the actual Ceph objects and a small XFS partition for storing details about the OSD. It would also create two partitions on the SSD, for the DB and WAL. You can create...
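As a sketch of the surrounding workflow, the commands below show the collocated case (no separate SSD), activation, and how to inspect the result. The device names are placeholders for whatever disks are in your system, and ceph-disk prepare destroys any existing data on the disks you pass it, so run these only against disks you intend to wipe:

```shell
# Collocated layout: RocksDB data and WAL live on the same device as the
# object data (device name is a placeholder for your actual data disk)
ceph-disk prepare --bluestore /dev/sda

# Split layout, as in the text: object data on the spinner, RocksDB DB and
# WAL on a faster device such as an SSD
ceph-disk prepare --bluestore /dev/sda --block.db /dev/sdb --block.wal /dev/sdb

# Activate the OSD by pointing at the small data partition ceph-disk created
# (udev rules normally trigger this automatically after prepare)
ceph-disk activate /dev/sda1

# List disks and partitions to confirm which partitions hold the block,
# block.db, and block.wal roles
ceph-disk list
```

On clusters where udev-based activation is working, the activate step is usually unnecessary; ceph-disk list is still a convenient way to verify which role each partition was assigned.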