At this point, we have a running Ceph cluster with one MON and three OSDs configured on ceph-node1. Now we will scale up the cluster by adding ceph-node2 and ceph-node3 as MON and OSD nodes.
A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on more than one monitor, and an odd number of them (for example, 3 or 5), to form a quorum. It uses the Paxos algorithm to maintain quorum majority. You will notice that your Ceph cluster is currently showing HEALTH_WARN; this is because we have not configured any OSDs other than those on ceph-node1. By default, the data in a Ceph cluster is replicated three times, across three different OSDs hosted on three different nodes.
Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster and configure OSDs on ceph-node2 and ceph-node3:
- Add the Ceph hosts ceph-node2 and ceph-node3 to /etc/ansible/hosts:
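The resulting inventory might look like the following sketch. The [mons] and [osds] group names follow the conventions used by ceph-ansible of this era; your group names may differ depending on the playbook version:

```ini
; /etc/ansible/hosts -- illustrative ceph-ansible inventory
; ceph-node1 already runs a MON and OSDs; ceph-node2 and ceph-node3 are new.
[mons]
ceph-node1
ceph-node2
ceph-node3

[osds]
ceph-node1
ceph-node2
ceph-node3
```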
- Verify that Ansible can reach the Ceph hosts mentioned in /etc/ansible/hosts:
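A common way to test connectivity is Ansible's ping module; this is a sketch, and assumes the nodes are already listed in the inventory with working SSH access:

```shell
# Ask every inventory host to respond via Ansible's ping module
ansible all -m ping

# Or target only the newly added nodes
ansible ceph-node2,ceph-node3 -m ping
```

Each reachable host should answer with a "pong" response; any failure here should be fixed (SSH keys, DNS/hosts entries) before running the playbook.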
- Run the Ansible playbook to scale up the Ceph cluster on ceph-node2 and ceph-node3:
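The invocation is typically the same one used for the initial deployment; the directory and playbook name below (/usr/share/ceph-ansible and site.yml) are common for ceph-ansible but vary by version and packaging, so treat them as assumptions:

```shell
# Re-run the main ceph-ansible playbook; it is idempotent, so existing
# daemons on ceph-node1 are left alone and the new nodes are configured
cd /usr/share/ceph-ansible
ansible-playbook site.yml
```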
Once the playbook completes the Ceph cluster scale-out job and the play recap shows failed=0, it means that ceph-ansible has deployed more Ceph daemons in the cluster, as shown in the following screenshot.
You have three more OSD daemons and one more monitor daemon running on ceph-node2, and three more OSD daemons and one more monitor daemon running on ceph-node3. Now you have a total of nine OSD daemons and three monitor daemons running on three nodes:
- We were getting a too few PGs per OSD warning, so we increased the default RBD pool's PG count from 64 to 128. Check the status of your Ceph cluster; at this stage, your cluster is healthy. PGs (placement groups) are covered in detail in Chapter 8, Ceph Under the Hood.
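The PG increase described above can be sketched as follows. On Ceph releases of this vintage, pgp_num must be raised alongside pg_num for the change to take full effect; the pool name rbd matches the default pool mentioned in the text:

```shell
# Inspect the current PG count for the rbd pool
ceph osd pool get rbd pg_num

# Raise pg_num and pgp_num from 64 to 128 to clear the
# "too few PGs per OSD" warning
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128

# Re-check cluster health; it should return to HEALTH_OK
# once the new PGs finish peering
ceph -s
```

Note that pg_num can only be increased, never decreased, on these releases, so the target value should be chosen with the final OSD count in mind.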