
Apache Ignite Quick Start Guide

By : Sujoy Acharya

Overview of this book

Apache Ignite is a distributed in-memory platform designed to scale and process large volumes of data. It can be integrated with microservices as well as monolithic systems, and can serve as a scalable, highly available, and performant deployment platform for microservices. This book will teach you to use Apache Ignite to build a high-performance, scalable, highly available system architecture with data integrity. The book takes you through the basics of Apache Ignite and in-memory technologies. You will learn about installing and clustering Ignite nodes, caching topologies, and various caching strategies, such as cache-aside, read-through, write-through, and write-behind. Next, you will delve into detailed aspects of Ignite's data grid: web session clustering and querying data. You will learn how to process large volumes of data using the compute grid and Ignite's map-reduce and executor service. You will learn about the memory architecture of Apache Ignite and how to monitor memory and caches. You will use Ignite for complex event processing, event streaming, and time-series prediction of opportunities and threats. Additionally, you will go through off-heap and on-heap caching, swapping, and native and Spring framework integration with Apache Ignite. By the end of this book, you will be confident with all the features of Apache Ignite 2.x that can be used to build a high-performance system architecture.

Web session clustering

Traditional web applications can scale out and handle user load by sitting behind a load balancer. A load balancer can be configured to route each user request to the next available server. Various routing algorithms exist, such as least-busy and round robin.
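The round-robin strategy mentioned above can be sketched in a few lines of Java. This is a minimal, illustrative balancer, not code from the book or from any real load balancer; the class and method names are invented for this example:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Minimal round-robin load balancer sketch (hypothetical example).
 * Each call to pick() returns the next server in the rotation,
 * wrapping back to the first server after the last one.
 */
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    /** Returns the next server in rotation; thread-safe via AtomicInteger. */
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb =
            new RoundRobinBalancer(List.of("web1", "web2", "web3"));
        for (int i = 0; i < 4; i++) {
            // Cycles web1, web2, web3, then wraps back to web1.
            System.out.println(lb.pick());
        }
    }
}
```

Because each server is chosen in turn regardless of session state, round robin spreads load evenly, which is exactly why stateful sessions become a problem, as the following section explains.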

The following diagram depicts a traditional web application topology with a load balancer. User requests are intercepted by a router/load balancer, which determines which web server can handle each request and routes the user to that server. Because the load is evenly distributed across the cluster nodes, adding more servers allows the cluster to handle more user requests:

However, this introduces a new problem for stateful applications that store web sessions. A user's session is stored on a single web/app server, so the load balancer must ensure that requests...