We now know a little more about how Hazelcast apportions data into partitions, how these partitions are automatically assigned to a node or a partition group, and how we might configure this behavior to fulfill our needs. We have also investigated how the cluster deals with failures, be it the loss of an individual node or of a group of nodes within a defined silo, and how we can recover from such failures to restore resilience. Finally, we saw how an underlying network fabric issue that creates a split-brain is dealt with, and how we can efficiently bring the multiple sides of the split back together and return to normal service.
Now that we have seen how things work behind the scenes to manage and distribute our data, we might need our application to know about some of these goings-on. In the next chapter, we shall look at how an application can register its interest in being notified of cluster events as they happen.