One of the primary advantages of technologies like Hazelcast is the distributed nature of their data storage: by fragmenting the held data and scattering it across many nodes, we can achieve high levels of reliability, scalability, and performance. In this chapter we will investigate:
How data is split into partitions
How that data is backed up within the overall cluster
How backups are replicated: synchronously versus asynchronously
How to trade read performance against consistency
How to silo groups of nodes together
How we can manage network partitioning (split-brain syndrome)
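Several of the topics above are controlled declaratively. As a preview, here is a minimal configuration sketch in the style of a Hazelcast XML file; the map name and values are illustrative only, and each element is covered in detail later in the chapter:

```xml
<hazelcast>
    <map name="example">
        <!-- One synchronous backup copy held on another node -->
        <backup-count>1</backup-count>
        <!-- Plus one asynchronous backup, replicated without blocking writes -->
        <async-backup-count>1</async-backup-count>
        <!-- Allow reads from local backup copies: faster reads,
             at the cost of potentially stale data -->
        <read-backup-data>true</read-backup-data>
    </map>
</hazelcast>
```

With `backup-count` set to 1, a write does not complete until the backup node confirms it; the asynchronous backup is best-effort and trades durability guarantees for lower write latency, a distinction we examine shortly.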