Redis was initially designed to be very lightweight and fast. For a long time, the only replication topology available in Redis was master/slave, in which the master receives all writes and replicates the changes to its slave (or slaves). This happens without any sort of automatic failover or data sharding. This topology works well in many scenarios, such as when:
The master has enough memory to store all of the data that you need
More slaves can be added to scale reads or to relieve network bandwidth limits (when the total read volume exceeds what a single machine can serve)
It is acceptable to stop your application when maintenance is required on the master machine
Data redundancy through slaves is enough
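As a sketch of how such a pair is wired up, a slave is pointed at its master with the `slaveof` directive in `redis.conf` (renamed `replicaof` in Redis 5 and later); the host and port below are assumptions for illustration:

```
# slave's redis.conf — replicate from the master (assumed to run at 192.168.0.1:6379)
slaveof 192.168.0.1 6379

# reject writes on the slave so that all writes go through the master
slave-read-only yes
```

The same can be done at runtime without a restart by issuing `SLAVEOF 192.168.0.1 6379` through `redis-cli` on the slave.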
But it does not work well in other scenarios, such as when:
The dataset is bigger than the available memory in the master Redis instance
A given application cannot be stopped when there are issues with the master instance...