
Learning Ceph

By: Karan Singh

Overview of this book

<p>Ceph is an open source, software-defined storage solution that runs on commodity hardware to provide exabyte-level scalability. It is well known as a highly reliable storage system with no single point of failure.</p> <p>This book gives you all the skills you need to plan, deploy, and effectively manage your Ceph cluster, guiding you through an overview of Ceph's technology, architecture, and components. With step-by-step, tutorial-style explanations of the deployment of each Ceph component, the book takes you through Ceph storage provisioning and integration with OpenStack.</p> <p>You will then learn how to deploy and set up your Ceph cluster, exploring its various components and why they are needed. This book takes you from a basic level of knowledge in Ceph to an expert understanding of its most advanced features.</p>
Table of Contents (18 chapters)
Learning Ceph
Credits
Foreword
About the Author
About the Reviewers
www.PacktPub.com
Preface
Index

Bringing an OSD out and down from a Ceph cluster


Before scaling a cluster down, make sure it has enough free space to accommodate all the data present on the node you are removing; the cluster should not have reached its near-full ratio.
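As a rough back-of-the-envelope check, you can verify that utilization after the node leaves would stay below Ceph's default near-full ratio of 85%. The figures below are hypothetical; read the real values from `ceph df` on your own cluster:

```shell
# Pre-flight check: after draining the node, does the surviving capacity
# keep utilization under the near-full ratio (85% by default)?
# All figures here are hypothetical placeholders.
used_gb=600              # data currently stored in the cluster
node_gb=300              # raw capacity leaving with ceph-node4
cluster_gb=1200          # total raw capacity today
remaining_gb=$((cluster_gb - node_gb))
# Compare used/remaining against 85/100 using integer arithmetic only.
if [ $((used_gb * 100)) -lt $((remaining_gb * 85)) ]; then
  echo "safe to drain"
else
  echo "too full - add capacity before removing the node"
fi
```

If the check fails, add capacity or delete data before marking any OSDs out.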

From the ceph-client1 node, generate some load on the Ceph cluster. This is an optional step to demonstrate an on-the-fly scale-down of a Ceph cluster. Make sure the host running the VirtualBox environment has adequate disk space, since we will be writing data to the Ceph cluster.

# dd if=/dev/zero of=/mnt/ceph-vol1/file1 count=3000 bs=1M
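The amount dd writes is simply count × bs; a quick sanity check with the same count=3000 and bs=1M confirms this is roughly 3 GiB, so budget free space accordingly:

```shell
# dd writes count x bs bytes; check the math before filling the volume.
count=3000
bs=$((1024 * 1024))                 # 1M in dd means 1 MiB
total_bytes=$((count * bs))
echo "dd will write $((total_bytes / 1024 / 1024)) MiB (~3 GiB)"
```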

As we need to scale down the cluster, we will remove ceph-node4 and all of its associated OSDs from the cluster. The OSDs must first be marked out so that Ceph can begin data recovery. From any of the Ceph nodes, take the OSDs out of the cluster:

# ceph osd out osd.9
# ceph osd out osd.10
# ceph osd out osd.11
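On a node hosting many OSDs, issuing these commands one by one is tedious. A small loop can mark them all out; this is a sketch rather than part of the book's procedure, and it defaults to a dry run that only prints each command (set CEPH=ceph to execute against a live cluster). The OSD ids 9, 10, and 11 are those hosted on ceph-node4 in this example:

```shell
# Mark every listed OSD "out" in one pass.
# CEPH defaults to a dry run (prints commands); CEPH=ceph runs them.
CEPH="${CEPH:-echo ceph}"
drain_osds() {
  for id in "$@"; do
    $CEPH osd out "osd.${id}"
  done
}
drain_osds 9 10 11
```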

As soon as you mark OSDs out of the cluster, Ceph will start rebalancing...