Optimizing with Hashes


The previous time series implementation uses one Redis key for each second, minute, hour, and day. In a scenario where an event is inserted every second, there will be 87,865 keys in Redis over a full day (assuming the day starts at 00:00:00), as the following breakdown shows (the arithmetic is also sketched in code after the list):

  • 86,400 keys for the 1sec granularity (60 * 60 * 24).

  • 1,440 keys for the 1min granularity (60 * 24).

  • 24 keys for the 1hour granularity (24 * 1).

  • 1 key for the 1day granularity.
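
To make that arithmetic concrete, here is a minimal Python sketch that reproduces the count. The key layout shown in the comment (events:1sec:<timestamp> and so on) is only an illustrative assumption, not necessarily the exact naming used by the earlier implementation:

    SECONDS_PER_DAY = 60 * 60 * 24

    # Bucket sizes, in seconds, for each granularity of the String-based design.
    # Each bucket becomes one Redis key, e.g. "events:1sec:1436400000" (hypothetical name).
    granularities = {
        "1sec": 1,
        "1min": 60,
        "1hour": 60 * 60,
        "1day": SECONDS_PER_DAY,
    }

    total = 0
    for name, bucket_size in granularities.items():
        keys_per_day = SECONDS_PER_DAY // bucket_size
        total += keys_per_day
        print(f"{name}: {keys_per_day} keys per day")

    print(f"total: {total} keys per day")  # 86,400 + 1,440 + 24 + 1 = 87,865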

This is an enormous number of keys per day, and the number grows linearly over time. Having so many keys makes debugging harder, and each key carries its own memory overhead. In a benchmark test that we ran, inserting one event per second for 24 hours (86,400 events), Redis allocated about 11 MB.
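
To get a rough sense of this memory cost yourself, a benchmark along the following lines can be run against a local Redis instance. This is only a sketch using the redis-py client: the key names are assumptions, and the exact figure will vary with your Redis version and configuration rather than matching the 11 MB above exactly.

    import time
    import redis  # pip install redis

    client = redis.Redis()  # assumes a local Redis server on the default port

    def used_memory():
        return client.info("memory")["used_memory"]

    before = used_memory()

    SECONDS_PER_DAY = 60 * 60 * 24
    start = (int(time.time()) // SECONDS_PER_DAY) * SECONDS_PER_DAY  # today at 00:00:00 UTC

    pipe = client.pipeline(transaction=False)
    for offset in range(SECONDS_PER_DAY):
        ts = start + offset
        # One String counter per bucket and granularity (hypothetical key names).
        pipe.incr(f"events:1sec:{ts}")
        pipe.incr(f"events:1min:{ts - ts % 60}")
        pipe.incr(f"events:1hour:{ts - ts % 3600}")
        pipe.incr(f"events:1day:{ts - ts % SECONDS_PER_DAY}")
        if offset % 1000 == 999:
            pipe.execute()  # flush the pipeline in batches
    pipe.execute()

    print(f"allocated roughly {(used_memory() - before) / 1024 / 1024:.1f} MB")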

We can optimize this solution by using Hashes instead of Strings. Small Hashes are encoded in a different, memory-optimized data structure called a ziplist. There are two conditions for a Hash to be encoded as a ziplist, and both...
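
As a quick way to see this encoding in action (a sketch, not the book's implementation; the key and field names here are made up), the hash-max-ziplist-entries and hash-max-ziplist-value settings govern the ziplist encoding, and the OBJECT ENCODING command reports which representation Redis chose. Recent Redis versions use a listpack for the same purpose and report that name instead:

    import redis  # pip install redis

    client = redis.Redis()  # assumes a local Redis server on the default port

    # The two limits that decide whether a Hash stays in the compact encoding.
    print(client.config_get("hash-max-ziplist-*"))

    client.delete("events:2015-01-01")

    # A small Hash: few fields, short values -> compact (ziplist/listpack) encoding.
    client.hset("events:2015-01-01", mapping={"00:00:00": 1, "00:00:01": 3})
    print(client.object("encoding", "events:2015-01-01"))  # b'ziplist' (b'listpack' on newer Redis)

    # A field value longer than hash-max-ziplist-value (64 bytes by default)
    # forces the Hash into the generic hashtable encoding.
    client.hset("events:2015-01-01", "big-field", "x" * 1024)
    print(client.object("encoding", "events:2015-01-01"))  # b'hashtable'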