
Redis Stack for Application Modernization

By: Luigi Fugaro, Mirko Ortensi

Overview of this book

In modern applications, efficiency in both operational and analytical aspects is paramount, demanding predictable performance across varied workloads. This book introduces you to Redis Stack, an extension of Redis, and guides you through its broad data modeling capabilities. With practical examples of real-time queries and searches, you'll explore Redis Stack's new approach to providing a rich data modeling experience, all within the same database server. You'll learn how to model and search your data in the JSON and hash data types, and work with features such as vector similarity search, which adds semantic search capabilities to your applications to search for similar texts, images, or audio files. The book also shows you how to use probabilistic Bloom filters to efficiently resolve recurrent big data problems. As you uncover the strengths of Redis Stack as a data platform, you'll explore use cases for managing database events and leveraging its stream processing features. Finally, you'll see how Redis Stack seamlessly integrates into microservices architectures, completing the picture. By the end of this book, you'll be equipped with best practices for administering and managing the server, ensuring scalability, high availability, data integrity, stored functions, and more.
Table of Contents (18 chapters)

Part 1: Introduction to Redis Stack
Part 2: Data Modeling
Part 3: From Development to Production

Compaction rules for Time Series

In Redis Stack's time series data type, a compaction rule is a mechanism used to downsample data points and reduce storage requirements over time. As time series data grows and accumulates, it often becomes less important to store high-resolution data for older timestamps. Compaction rules help maintain a balance between storage costs and resolution requirements.

A compaction rule is a user-defined policy that dictates how data points should be aggregated over a given time period (e.g., every minute, hour, or day) and retained in a downsampled series. The rule specifies the aggregation method, such as average, minimum, maximum, sum, or count, among the others described in the Aggregation framework section of this chapter.
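To make the semantics concrete, here is a minimal plain-Python sketch of what a compaction rule computes: samples are grouped into fixed-width time buckets and each bucket is reduced with the chosen aggregator. The function name `downsample` and the sample data are illustrative only; the real work is done server-side by Redis Stack.

```python
from collections import defaultdict

def downsample(samples, bucket_ms, agg):
    """Group (timestamp_ms, value) samples into fixed-width buckets
    and aggregate each bucket -- mirroring what a compaction rule does."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # A sample belongs to the bucket that starts at ts - (ts % bucket_ms)
        buckets[ts - ts % bucket_ms].append(value)
    funcs = {
        "avg": lambda v: sum(v) / len(v),
        "min": min,
        "max": max,
        "sum": sum,
        "count": len,
    }
    return {start: funcs[agg](vals) for start, vals in sorted(buckets.items())}

# Three samples fall in the first 5-minute (300,000 ms) bucket, one in the next
samples = [(0, 10.0), (60_000, 20.0), (120_000, 30.0), (300_000, 40.0)]
print(downsample(samples, 300_000, "avg"))  # {0: 20.0, 300000: 20.0 for first bucket... }
```

Running this yields one averaged data point per 5-minute bucket, which is exactly the shape of the downsampled series a compaction rule produces.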

For example, you can set up a compaction rule to downsample data every 5 minutes using the average aggregation function. This rule would create a new time series key where each data point represents the average value of...
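In practice, such a rule is created with the TS.CREATERULE command, which takes a source key, a destination key, an aggregator, and a bucket duration in milliseconds. The following command fragment is a sketch (the key names sensor:temp and sensor:temp:avg5m are hypothetical, and a running Redis Stack instance is assumed):

```shell
# Create the source series and the destination series for the compaction
redis-cli TS.CREATE sensor:temp
redis-cli TS.CREATE sensor:temp:avg5m

# Downsample into 5-minute (300,000 ms) buckets using the average
redis-cli TS.CREATERULE sensor:temp sensor:temp:avg5m AGGREGATION avg 300000
```

Note that the destination key must exist before the rule is created, and that the rule writes a bucket's aggregate to the destination series as new samples arrive in the source series.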