
Mastering Ceph - Second Edition

By: Nick Fisk

Overview of this book

Ceph is an open source distributed storage system that scales to exabyte-level deployments. This second edition of Mastering Ceph takes you a step closer to becoming an expert on Ceph. You'll get started by understanding the design goals and planning steps that should be undertaken to ensure successful deployments. In the next sections, you'll be guided through setting up and deploying the Ceph cluster with the help of orchestration tools. This will allow you to witness Ceph's scalability, erasure coding (data protection) mechanism, and automated data backup features across multiple servers. You'll then discover more about the key areas of Ceph, including BlueStore, erasure coding, and cache tiering, with the help of examples. Next, you'll learn some of the ways to export Ceph into non-native environments and understand some of the pitfalls that you may encounter. The book features a section on tuning that will take you through the process of optimizing both Ceph and its supporting infrastructure. You'll also learn to develop applications that use librados and distributed computations with shared object classes. Toward the concluding chapters, you'll learn to troubleshoot issues and handle various scenarios where Ceph is not likely to recover on its own. By the end of this book, you'll be able to master storage management with Ceph and generate solutions for managing your infrastructure.
Table of Contents (18 chapters)

Section 1: Planning and Deployment
Section 2: Operating and Tuning
Section 3: Troubleshooting and Recovery

Latency

When running benchmarks to test the performance of a Ceph cluster, you are ultimately measuring latency. All other benchmarking metrics, including IOPS, MBps, and even higher-level application metrics, are derived from the latency of each request.

IOPS is the number of I/O requests completed in a second; the latency of each request directly affects the possible IOPS and can be calculated using this formula:

IOPS = 1 / average latency (in seconds)

An average latency of 2 milliseconds per request will result in roughly 500 IOPS, assuming each request is submitted in a synchronous fashion:

1/0.002 = 500
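The relationship above can be sketched in a few lines of Python. This is a minimal illustration (the function name is my own, not from the book) of how a single synchronous submitter's IOPS follows directly from its average per-request latency:

```python
def iops_from_latency(avg_latency_s: float) -> float:
    """For one synchronous submitter, each request must complete before
    the next is issued, so IOPS = 1 / average latency in seconds."""
    return 1.0 / avg_latency_s

# 2 ms average latency -> roughly 500 IOPS
print(iops_from_latency(0.002))
```

Note that this holds only for queue depth 1; with multiple outstanding requests, total IOPS scales with the number of in-flight I/Os until some other resource saturates.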

MBps is simply the number of IOPS multiplied by the I/O size:

500 IOPS * 64 KB = 32,000 KBps (roughly 32 MBps)
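Combining the two formulas, throughput can be derived straight from latency and I/O size. A small sketch (again with illustrative function names of my own choosing):

```python
def throughput_kbps(avg_latency_s: float, io_size_kb: float) -> float:
    """Throughput is simply IOPS multiplied by the I/O size,
    where IOPS = 1 / average latency for a synchronous workload."""
    iops = 1.0 / avg_latency_s
    return iops * io_size_kb

# 2 ms latency with 64 KB I/Os -> 32,000 KBps (~32 MBps)
print(throughput_kbps(0.002, 64))
```

This makes the tuning implication explicit: halving the average latency doubles both the achievable IOPS and the resulting MBps for the same I/O size.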

When you are carrying out benchmarks, you are actually measuring the end result of latency. Therefore, any tuning that you carry out should aim to reduce the end-to-end latency of each I/O request...