RADOS load-gen


Similar to rados bench, RADOS load-gen is another useful tool provided by Ceph that works out of the box. As its name suggests, the RADOS load-gen tool generates load on a Ceph cluster and is handy for simulating high-load scenarios.

How to do it...

  1. Let's try to generate some load on our Ceph cluster with the following command:
        # rados -p rbd load-gen --num-objects 50 --min-object-size 4M \
        --max-object-size 4M --max-ops 16 --min-op-len 4M --max-op-len 4M \
        --percent 5 --target-throughput 2000 --run-length 60
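
  2. While the load is being generated (for up to the 60-second --run-length above), you can watch its effect from another terminal. These are standard Ceph CLI commands, shown here against the rbd pool used in step 1:

        # watch ceph -s
        # ceph osd pool stats rbd
        # rados df

     ceph -s shows cluster-wide client throughput, ceph osd pool stats reports per-pool I/O rates, and rados df lets you confirm that the generated objects and their capacity usage are landing in the rbd pool.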

How it works...

The syntax for RADOS load-gen is as follows:

# rados -p <pool-name> load-gen

The following is a detailed explanation of the options used in the preceding command:

  • --num-objects: The total number of objects
  • --min-object-size: The minimum object size in bytes (size suffixes such as 4M are accepted, as in the example above)
  • --max-object-size: The maximum object size in bytes
  • --min-ops: The minimum number of operations
  • --max-ops: The maximum number of operations
  • --min-op-len: The minimum operation length...
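
As a rough illustration of what the step 1 command asks for: 50 objects of 4 MB each amount to about 200 MB of object data, moved in 4 MB operations with at most 16 in flight, for up to 60 seconds or until the target throughput cap is reached.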