As we have seen, one or more clients can participate in a transaction that issues read and write operations against a cache. When transactions execute concurrently on the same cache entry, the interleaved execution of their reads and writes can produce undesirable results, such as lost updates. A good cache solution must therefore guarantee data consistency while still allowing multiple transactions to read and write concurrently.
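A minimal sketch of such an undesirable interleaving (modeling the cache as a plain Python dictionary, not any particular product's API): two transactions both read the same entry before either writes, so the second write silently discards the first — the classic lost update.

```python
# Shared cache entry, modeled as a dictionary (illustrative only).
cache = {"x": 100}

# Transactions A and B both intend to add to "x".
# Interleaving: A reads, B reads, A writes, B writes.
a_read = cache["x"]        # A reads 100
b_read = cache["x"]        # B reads 100, before A has written
cache["x"] = a_read + 10   # A writes 110
cache["x"] = b_read + 20   # B writes 120 -- A's update is lost

print(cache["x"])  # 120, not the 130 a serial execution would produce
```

Under any serial order (A then B, or B then A) the result would be 130; the interleaving above yields 120, which corresponds to no serial execution at all.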
Concurrency control is the activity that prevents such unexpected results; its primary goal is to ensure that concurrent transactions have the same effect as some serial execution of those transactions.
Traditionally, the concurrency problem is solved by providing transaction isolation: the system keeps a single version of the data and uses locks to block conflicting requests.
Locking is essential to avoid the change collisions that result from simultaneous updates to the same cache entry by two or more concurrent users.
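As a sketch of this locking approach (again using Python threads and a dictionary as the cache, not any specific cache product's API), a per-entry lock makes each read-modify-write sequence atomic, so concurrent updates can no longer collide:

```python
import threading

# Shared cache entry and the lock that guards it (illustrative only).
cache = {"counter": 0}
lock = threading.Lock()

def increment(times):
    """Perform `times` read-modify-write updates under the entry lock."""
    for _ in range(times):
        with lock:                         # block other writers to this entry
            value = cache["counter"]       # read
            cache["counter"] = value + 1   # write back

# Two concurrent "users" updating the same entry.
threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(cache["counter"])  # 20000 -- no update was lost
```

Without the `with lock:` block, the two threads could interleave their reads and writes as in the earlier example, and the final count would be unpredictable.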
If locking is not available and several users...