Ceph: Designing and Implementing Scalable Storage Systems

By: Michael Hackett, Vikhyat Umrao, Karan Singh, Nick Fisk, Anthony D'Atri, Vaibhav Bhembre
Overview of this book

This Learning Path takes you from the basics of Ceph all the way to an in-depth understanding of its advanced features. You'll gain the skills to plan, deploy, and manage your Ceph cluster. After an introduction to the Ceph architecture and its core projects, you'll set up a Ceph cluster and learn how to monitor its health, improve its performance, and troubleshoot issues. Following the step-by-step approach of this Learning Path, you'll learn how Ceph integrates with OpenStack services such as Glance, Manila, Swift, and Cinder. With knowledge of federated architecture and CephFS, you'll use Calamari and VSM to monitor the Ceph environment. In later chapters, you'll study key areas of Ceph, including BlueStore, erasure coding, and cache tiering, and discover what each can do for your storage system. In the concluding chapters, you will develop applications that use Librados, perform distributed computations with shared object classes, and see how Ceph and its supporting infrastructure can be optimized. By the end of this Learning Path, you'll have the practical knowledge to operate Ceph in a production environment.

This Learning Path includes content from the following Packt products:

• Ceph Cookbook by Michael Hackett, Vikhyat Umrao, and Karan Singh
• Mastering Ceph by Nick Fisk
• Learning Ceph, Second Edition by Anthony D'Atri, Vaibhav Bhembre, and Karan Singh

Benchmarking the Ceph Block Device


The rados bench and rados load-gen tools, which we discussed in the previous recipe, benchmark a Ceph cluster at the pool level. In this recipe, we will focus on benchmarking the Ceph Block Device with the rbd bench-write tool. The rbd command-line interface provides a bench-write subcommand that performs write benchmarking operations against a Ceph RADOS Block Device (RBD) image.

How to do it...

To benchmark the Ceph Block Device, we need to create a block device and map it to the Ceph client node:

  1. Create a Ceph Block Device named block-device1 with a size of 10 GB (10240 MB), and map it:
        # rbd create block-device1 --size 10240 --image-feature layering
        # rbd info --image block-device1
        # rbd map block-device1
        # rbd showmapped
  2. Create a filesystem on the block device and mount it (the device name, /dev/rbd1 here, should match the output of rbd showmapped from the previous step):
        # mkfs.xfs /dev/rbd1
        # mkdir -p /mnt/ceph-block-device1
        # mount /dev/rbd1 /mnt/ceph-block-device1
        # df -h /mnt/ceph-block-device1...
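
The recipe continues beyond this excerpt with the benchmark run itself. As a minimal sketch of that step, rbd bench-write is invoked directly against the image (it writes at the image level rather than through the mounted filesystem); the option values below for I/O size, thread count, total bytes, and access pattern are illustrative assumptions, not the book's exact figures:

        # rbd bench-write block-device1 --io-size 4096 \
          --io-threads 16 --io-total 1073741824 --io-pattern seq

The command reports operations per second and bytes per second while it runs. On newer Ceph releases, the same functionality is exposed as rbd bench --io-type write.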