Getting Started with Hazelcast

By: Matthew Johns

Moving to a new ground


So far we have been talking mostly about simple persisted data and caches, but we should not think of Hazelcast as purely a cache; it is much more powerful than that. It is an in-memory data grid that supports a number of distributed collections and features. We can load data from various sources into different structures, send messages across the cluster, take out locks to guard against concurrent activity, and listen to the goings-on inside the workings of the cluster. Most of these implementations correspond to a standard Java collection, or function in a manner comparable to other similar technologies, but with the distribution and resilience capabilities already built in. The features include the following (a brief code sketch follows the list):

  • Standard utility collections

    • Map: Key-value pairs

    • List: Collection of objects

    • Set: Non-duplicated collection

    • Queue: Offer/poll FIFO collection

  • Specialized collection

    • MultiMap: Key to list of values collection

  • Lock: Cluster-wide mutex

  • Topic: Publish/subscribe messaging

  • Concurrency utilities

    • AtomicNumber: Cluster-wide atomic counter

    • IdGenerator: Cluster-wide unique identifier generation

    • Semaphore: Concurrency limitation

    • CountDownLatch: Concurrent activity gatekeeping

  • Listeners: Application notifications as things happen
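
To give a feel for how these structures are used, here is a minimal sketch assuming the Hazelcast 2.x API covered in this book; the collection, lock, and counter names (capitals, users, nightly-job, and so on) are purely illustrative. Each structure is obtained by name from a HazelcastInstance and behaves much like its familiar java.util counterpart, only shared across the whole cluster.

    import java.util.List;
    import java.util.Queue;
    import java.util.Set;
    import java.util.concurrent.locks.Lock;

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.core.MultiMap;

    public class GridSampler {
        public static void main(String[] args) {
            // Start a node in this JVM; further nodes started elsewhere join the same cluster
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());

            // Standard utility collections, each obtained by name
            IMap<String, String> capitals = hz.getMap("capitals");
            capitals.put("GB", "London");

            List<String> log = hz.getList("log");
            log.add("node started");

            Set<String> users = hz.getSet("users");
            users.add("bob");

            Queue<String> jobs = hz.getQueue("jobs");
            jobs.offer("resize-images");

            // Specialised collection: one key holding multiple values
            MultiMap<String, String> tags = hz.getMultiMap("tags");
            tags.put("hazelcast", "cache");
            tags.put("hazelcast", "data-grid");

            // Cluster-wide mutex guarding a critical section
            Lock lock = hz.getLock("nightly-job");
            lock.lock();
            try {
                // Only one node in the cluster can be in here at a time
            } finally {
                lock.unlock();
            }

            // Publish/subscribe messaging, plus cluster-wide counter and ID generation
            hz.getTopic("announcements").publish("node started");
            long hits = hz.getAtomicNumber("hits").incrementAndGet();
            long orderId = hz.getIdGenerator("order-ids").newId();

            System.out.println("hits=" + hits + ", next order id=" + orderId);

            // Shut this node down again
            Hazelcast.shutdownAll();
        }
    }

Any other node that looks up the same names will see the same data, which is what gives these otherwise familiar collections their distributed nature.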

In addition to the data storage collections, Hazelcast also features a distributed executor service that allows runnable tasks to be created and run anywhere on the cluster to obtain, manipulate, and store results. We could have a number of collections containing source data, spin up a number of tasks to process that disparate data (for example, averaging or aggregating it), and output the results into another collection for consumption.
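
As a rough sketch of that pattern, the following submits a serializable Callable to the cluster's executor service; it reads source data from one distributed map and writes its result into another. The map names (prices, results) and the executor name are purely illustrative, and the task uses the HazelcastInstanceAware callback so that whichever member runs it can hand it the local instance.

    import java.io.Serializable;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Future;

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.HazelcastInstanceAware;
    import com.hazelcast.core.IMap;

    public class AveragePriceTask implements Callable<Double>, Serializable, HazelcastInstanceAware {

        private transient HazelcastInstance hz;

        @Override
        public void setHazelcastInstance(HazelcastInstance hz) {
            // Called on whichever member executes the task, handing it that member's instance
            this.hz = hz;
        }

        @Override
        public Double call() {
            // Read the source data from one distributed collection...
            IMap<String, Integer> prices = hz.getMap("prices");
            double total = 0;
            for (Integer price : prices.values()) {
                total += price;
            }
            double average = prices.isEmpty() ? 0 : total / prices.size();

            // ...and store the result in another collection for later consumption
            hz.getMap("results").put("average-price", average);
            return average;
        }

        public static void main(String[] args) throws Exception {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());
            hz.getMap("prices").put("widget", 10);

            // Submit the task to the cluster; it may run on any member
            ExecutorService executor = hz.getExecutorService("analysis");
            Future<Double> result = executor.submit(new AveragePriceTask());
            System.out.println("Average price: " + result.get());

            Hazelcast.shutdownAll();
        }
    }

Note that the task must be Serializable, as it is sent across the network to whichever member ends up executing it.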

Again, just as we could scale up our data capacities by adding more nodes, we can also increase the execution capacity in exactly the same way. This essentially means that by building our data layer around Hazelcast, if our application's needs rapidly increase, we can continuously increase the number of nodes to satisfy seemingly extensive demands, all without having to redesign or re-architect the actual application.