Now that we have a working Ceph cluster at our disposal, we can do all manner of cool things with it. We can add new OSD or mon nodes, or remove existing ones. We can add multiple clients and test simultaneous operations against the cluster. We can also add different types of daemons as separate VMs. In short, we can scale the cluster up or down at will. In this section, we will show how easy it is to manipulate the cluster using variables in our configuration files.
Let's revisit the vagrant_variables.yml file that we previously edited to adjust the number of nodes before bootstrapping our cluster. We will tweak the numbers in this file to scale as we wish. Open this file in your favorite editor. Before making any changes, the variables reflecting your existing nodes should look like this:
mon_vms: 1
osd_vms: 3
mds_vms: 0
rgw_vms: 0
rbd_mirror_vms: 0
client_vms: 1
iscsi_gw_vms: 0
mgr_vms: 0
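As an illustrative sketch of the scaling workflow, here is how the file might look after a hypothetical edit that raises the monitor count. The value 3 is an assumption chosen for the example, not a requirement taken from the text above; the other variables are left exactly as they were:

```yaml
# vagrant_variables.yml -- illustrative edit, not prescribed by the text:
mon_vms: 3        # was 1; an odd number of monitors lets them form a quorum
osd_vms: 3
mds_vms: 0
rgw_vms: 0
rbd_mirror_vms: 0
client_vms: 1
iscsi_gw_vms: 0
mgr_vms: 0
```

After saving a change like this, running `vagrant up` again creates only the VMs that do not yet exist, leaving the machines that are already running untouched.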
We know that one instance of a mon node is a single point of...