Apache Ignite Quick Start Guide

By: Sujoy Acharya

Overview of this book

Apache Ignite is a distributed in-memory platform designed to scale and process large volumes of data. It can be integrated with microservices as well as monolithic systems, and can serve as a scalable, highly available, and performant deployment platform for microservices. This book will teach you to use Apache Ignite to build a high-performance, scalable, highly available system architecture with data integrity. The book takes you through the basics of Apache Ignite and in-memory technologies. You will learn how to install and cluster Ignite nodes, and explore caching topologies and various caching strategies, such as cache-aside, read-through, write-through, and write-behind. Next, you will delve into detailed aspects of Ignite’s data grid: web session clustering and querying data. You will learn how to process large volumes of data using the compute grid and Ignite’s map-reduce and executor service, and you will study the memory architecture of Apache Ignite along with monitoring memory and caches. You will use Ignite for complex event processing, event streaming, and the time-series prediction of opportunities and threats. Additionally, you will go through off-heap and on-heap caching, swapping, and native and Spring Framework integration with Apache Ignite. By the end of this book, you will be confident with all the features of Apache Ignite 2.x that can be used to build a high-performance system architecture.

Summary

In this chapter, we looked at the Apache Ignite caching architecture. We started by explaining the CAP theorem for distributed data stores and how Apache Ignite can be tuned toward AP (availability) or CP (consistency). Apache Ignite provides an efficient clustering API; we looked at the various clustering configurations, such as node discovery, node deployment, and node grouping.
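
As a quick refresher, a minimal Java discovery configuration might look like the sketch below. It uses the static TcpDiscoveryVmIpFinder from the Ignite API; the class name and the 127.0.0.1:47500..47509 address range are illustrative placeholders rather than values taken from the chapter.

import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class DiscoveryExample {
    public static void main(String[] args) {
        // Static IP finder: nodes discover each other via a fixed address list.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        // Start a node that joins (or forms) the cluster.
        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Nodes in cluster: " + ignite.cluster().nodes().size());
        }
    }
}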

This chapter also explored the caching modes used to distribute data. We looked at the local, partitioned, and replicated cache modes and how scalability, availability, and read/write performance can be tuned by choosing the appropriate mode.
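
In code, the cache mode is chosen on the CacheConfiguration. The following minimal sketch creates a partitioned cache with one backup copy; the cache name customerCache and the backup count are arbitrary illustrations.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheModeExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> cacheCfg =
                new CacheConfiguration<>("customerCache");

            // PARTITIONED spreads primary entries across the cluster;
            // REPLICATED and LOCAL are the other modes discussed in this chapter.
            cacheCfg.setCacheMode(CacheMode.PARTITIONED);

            // One backup copy per entry trades some write performance for availability.
            cacheCfg.setBackups(1);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);
            cache.put(1, "Alice");
            System.out.println(cache.get(1));
        }
    }
}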

This chapter concluded with the caching strategies: cache-aside, read-through and write-through, and write-behind. The next chapter will explain the data grid concept and explore the web session clustering technique.
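
For reference, the sketch below shows roughly how these strategies are wired up: read-through, write-through, and write-behind are enabled on the CacheConfiguration and backed by a CacheStore implementation. The MapBackedStore class here is a hypothetical store that keeps data in an in-memory map purely for illustration; a real store would load from and persist to a database.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheStoreExample {
    /** Illustrative store; a real implementation would talk to a database. */
    public static class MapBackedStore extends CacheStoreAdapter<Integer, String> {
        private static final Map<Integer, String> DB = new ConcurrentHashMap<>();

        @Override public String load(Integer key) {
            return DB.get(key);                     // read-through: called on a cache miss
        }

        @Override public void write(Cache.Entry<? extends Integer, ? extends String> e) {
            DB.put(e.getKey(), e.getValue());       // write-through / write-behind target
        }

        @Override public void delete(Object key) {
            DB.remove(key);
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("storeBackedCache");
            cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MapBackedStore.class));
            cfg.setReadThrough(true);        // cache misses fall through to the store
            cfg.setWriteThrough(true);       // writes are propagated to the store
            cfg.setWriteBehindEnabled(true); // propagate asynchronously in batches

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(1, "first");           // cached now, flushed to the store in the background
            System.out.println(cache.get(1));
        }
    }
}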