Instant Redis Optimization How-to

By: Arun Chinnachamy

Overview of this book

The database is the backbone of any application, and Redis is a next-generation NoSQL data store that can deliver ultra-fast performance when tuned and calibrated correctly. Instant Redis Optimization How-to shows you how to leverage Redis in your application for greater performance and scalability. Using real-world examples of Redis as a caching and queuing service, you will learn how to install and calibrate Redis to optimize memory usage and read and write speed, and how to use bulk writes, transactions, and publish/subscribe features to get the most out of it. The book starts with clear instructions on installing and fine-tuning Redis to work efficiently in your application stack. You will then learn how to maintain persistence, how to optimize Redis to handle different data types, and how to optimize memory usage. Offering best practices and troubleshooting tips, it also shows you how to manage and maintain performance, and it finishes by recommending the best client libraries for all major programming languages. If you want to use Redis for its blazing-fast capabilities, this book is for you. By the end of it, you will know how to create blazing-fast applications using Redis.
Table of Contents (7 chapters)

Detecting performance bottlenecks (Intermediate)


Redis is well known for its performance. However, its performance depends not only on the server itself but also on the environment in which it runs, with factors ranging from server setup and hardware to memory management by the operating system. In this recipe, we discuss what to consider to get the most out of Redis.

Getting ready

The major bottlenecks that need to be considered are:

  • Network and communication

  • RAM

  • I/O operations

By considering the guidelines that follow, we can reduce these latencies and get better performance from the Redis instance.

How to do it...

  • Use Unix domain sockets for clients connecting from the same host as the server.

  • Use command pipelining whenever possible. More information about pipelining will be provided in the next recipe.

  • Avoid running Redis in virtual instances. Run Redis on a physical machine to get better performance.

  • Always make sure there is enough RAM to accommodate the whole data set, plus the spikes that occur during save operations. Avoid letting data overflow into the swap partition.

  • Consider using faster disk I/O devices for saving RDB and AOF files.

  • As connections to the server are TCP-based, keep each connection open as long as possible to reduce overhead, rather than repeatedly opening and closing connections from the same script.

  • Consider dedicating a CPU core to each Redis instance, but be aware that incorrect affinity configuration can hurt performance rather than help it.
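As a sketch of the first point above, a Unix domain socket can be enabled in redis.conf. The socket path and permissions shown here are illustrative examples, not required values:

```conf
# redis.conf -- enable a Unix domain socket for clients on the same host.
# The path and permission mode below are examples; pick your own.
unixsocket /tmp/redis.sock
unixsocketperm 700
```

Local clients can then connect through the socket, for example with `redis-cli -s /tmp/redis.sock`, avoiding the TCP stack entirely.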

How it works...

In most scenarios, the clients will connect to the Redis instance over a network through a TCP/IP connection. So, the total time taken for a request to complete is calculated by:

Total time for request completion = network latency to reach server + time taken by Redis to process and respond + network latency for response to come back

Even if Redis can process a command in a few microseconds, poor performance will be observed because of the multiple round trips made to the Redis server. For example, consider two clients connecting to the same Redis server. One client connects from a remote system from which an average round trip takes around 100 ms, and the other connects from the local host through a Unix domain socket. Assume that in both cases Redis completes the operation in 2 ms at most.

For the first client, the time taken to perform 1,000 individual commands will be:

Time taken = 1000 x (100 ms + 2 ms) = 102000 ms = 102 seconds.

For the second client, which is connecting from a local host, the latency can be as low as 30 microseconds, or in the worst-case scenario, say, 100 microseconds.

Time taken = 1000 x (100 microseconds + 2 ms) = 2100 ms = 2.1 seconds.
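The arithmetic above can be sketched in a few lines of Python. This is an illustration of the cost model only, not a benchmark; the function name is my own:

```python
def total_time_ms(commands, round_trip_ms, processing_ms):
    """Total time when each command waits for its own network round trip
    plus the server-side processing time before the next one is sent."""
    return commands * (round_trip_ms + processing_ms)

# Remote client: 100 ms round trip, 2 ms server-side processing.
remote = total_time_ms(1000, 100, 2)    # 102000 ms = 102 seconds

# Local client over a Unix socket: roughly 100 microseconds (0.1 ms) round trip.
local = total_time_ms(1000, 0.1, 2)     # about 2100 ms = 2.1 seconds
```

The same server-side work (2 ms per command) dominates locally, while the network dominates remotely, which is why pipelining or batching commands helps the remote client far more.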

So, the longer it takes a request to reach the server, the longer it takes to complete the request. Apart from the network latency, more latency is added by the operating system for packet queuing. The latency gets aggravated in the case of virtual machines when compared to the physical machines, due to the extra level of networking.

So by using sockets, pipelining commands, and reducing networking layers, we can achieve better performance from the same Redis instance.

As mentioned earlier, Redis needs to keep the complete data set in the memory to work. In the case of larger data sets, it is common for the system to run out of physical RAM when lots of other processes are also running on the same machine. To free some physical RAM, the server will start swapping.

Note

Paging is a memory-management scheme that allows the operating system to use the disk as a swap or secondary memory when the RAM cannot hold all the data. This virtual memory implementation is important for the working of all operating systems.

To make room for other processes in RAM, the operating system swaps memory blocks between the physical disk and RAM. If the system is running out of physical memory, it may take a memory block belonging to Redis and swap it to disk. When Redis later needs to access that block, the operating system must page it back into physical RAM first, and Redis's performance drops drastically. To prevent this, keep monitoring the RAM usage of Redis and provision around twice as much RAM as the size of the data set.
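On Linux, two kernel settings are commonly suggested for hosts dedicated to Redis; both are tuning choices for you to evaluate, not requirements. The overcommit setting relates to the fork performed during background saves, and swappiness controls how eagerly the kernel swaps:

```conf
# /etc/sysctl.conf -- illustrative settings for a Redis host.
# Allow the background-save fork to succeed even when memory looks tight.
vm.overcommit_memory = 1
# Prefer keeping pages in RAM instead of swapping them out (a tuning choice).
vm.swappiness = 0
```

Apply them with `sysctl -p` (or reboot), and keep monitoring actual swap usage rather than relying on the settings alone.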

Tip

Consider using the maxmemory option to limit the amount of RAM used for caching purposes.
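A minimal sketch of the maxmemory tip in redis.conf; the limit and eviction policy below are examples to be chosen per workload:

```conf
# redis.conf -- cap memory use when Redis is a cache.
# 2gb is an example limit; size it to your data set and host.
maxmemory 2gb
# Evict least-recently-used keys across the whole keyspace when the cap is hit.
maxmemory-policy allkeys-lru
```

With a cap and an eviction policy set, Redis sheds old cache entries instead of growing into swap.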

Persistence is not perfect in Redis and it comes with a few drawbacks of its own. As discussed in the previous recipe, Persistence in Redis, both the persistence modes in Redis, Snapshotting and AOF, have to fork a child process to generate an RDB file and rewrite an AOF file respectively. Redis uses copy-on-write forking, letting both parent and child processes share the common memory pages. The memory pages are duplicated only when any change happens in the parent or child process. As the fork operation is initiated by a parent process, it could cause some latency. If the disk resources are shared with other processes and if any other process is performing disk I/O, the performance can deteriorate considerably.

If the AOF is configured with appendfsync always, every write creates a disk I/O, which in turn translates into more latency in the system. This disk latency can be minimized by keeping other I/O-heavy processes off the same system.

It is recommended to use a solid-state drive (SSD) for the AOF and RDB files, which helps decrease disk latency, or to dedicate a disk to Redis alone.
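The fsync trade-off discussed above is controlled in redis.conf. The values shown are the three documented options; everysec is a common compromise, but the right choice depends on your durability needs:

```conf
# redis.conf -- AOF persistence with a middle-ground fsync policy.
appendonly yes
# 'always'   -> fsync on every write: safest, most disk I/O and latency.
# 'everysec' -> fsync once per second: lose at most ~1 s of writes.
# 'no'       -> let the OS decide when to flush: fastest, least safe.
appendfsync everysec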

There's more...

Apart from the network, RAM, and disk I/O, there are a couple of other factors that may affect the performance of Redis.

CPU bottleneck

Redis is a single-threaded application: a single thread serves all client requests. When multiple simultaneous requests arrive, they are queued and processed sequentially. This might look like a bad idea, since requests may wait longer than expected, but Redis is rarely perceived as slow because each request takes very little time to complete, and the main thread does not block on I/O: those operations are performed by forked child processes.

Due to the single-threaded architecture, even when provided with multiple cores, Redis cannot leverage them. So Redis likes processors with larger caches and is neutral towards multiple cores. There is very little chance of the CPU becoming the bottleneck, as Redis is usually memory- or network-bound.

But to make use of multiple cores, we can start multiple Redis instances on the same server using different ports and treat them as different servers. Due to the low memory footprint of Redis (approximately 1 MB per instance), we can run multiple instances without putting any serious load on memory.
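A minimal sketch of running two instances on one machine: give each its own port, pidfile, and data directory. All paths below are illustrative:

```conf
# instance-a.conf -- first instance on the default port.
port 6379
pidfile /var/run/redis-a.pid
dir /var/lib/redis-a

# instance-b.conf -- second instance on its own port and data directory.
port 6380
pidfile /var/run/redis-b.pid
dir /var/lib/redis-b
```

Start each with its own file (`redis-server instance-a.conf`, `redis-server instance-b.conf`); clients treat them as two independent servers.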

Latency due to the application's design

Apart from the server setup and persistence, even the application's design can affect the performance of Redis. For example:

  • Making Redis write logs at the debug level creates serious performance issues. In a production environment, make sure the log level is set to notice or warning.

  • Slow commands also affect the performance of Redis, and complex commands add latency of their own. As all requests are served by a single thread, any command that takes longer increases the response time of every other command. Basic commands take very little time, but sorting or computing the union or intersection of two large sets takes a while. Monitor the output of the SLOWLOG command and optimize the offending commands.

  • Redis provides a mechanism for auto-expiring keys in its data set. When inserting a key, an expiry time can also be specified; when it is reached, Redis removes the key. An expiry cycle runs every 100 milliseconds, which requires additional processing to ensure that already-expired keys do not keep using memory. Too many keys expiring at the same time can therefore be a source of latency.
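The last point can be mitigated in application code by adding random jitter to expiry times, so that keys written together do not all expire in the same cycle. A sketch under my own naming; the helper and the 10% jitter fraction are illustrative choices, not part of Redis:

```python
import random

def jittered_ttl(base_seconds, jitter_fraction=0.1):
    """Return base_seconds shifted by up to +/- jitter_fraction at random,
    so that keys created in the same batch expire at spread-out times."""
    spread = base_seconds * jitter_fraction
    return int(base_seconds + random.uniform(-spread, spread))

# Instead of giving every cached key EXPIRE of exactly 3600 seconds,
# use a slightly different TTL each time, somewhere in [3240, 3960]:
ttl = jittered_ttl(3600)
```

The client then passes this value as the key's expiry (for example via the EXPIRE command or the expiry argument of SET), trading a little TTL precision for smoother expiry cycles.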