
Memory benchmarking


In this recipe, we will discuss how to benchmark memory speed using open source tools.

Getting ready

As with the CPU test suite, Phoronix also provides a memory test suite that covers RAM benchmarking. Alternatively, we can use the dedicated memtest86 tool, which performs memory benchmarking during the server boot phase. Another neat trick is to create a tmpfs mount point in RAM and then create a PostgreSQL tablespace on it. Once we create the tablespace, we can create in-memory tables on it and benchmark table read/write operations. We can also use the dd command to measure memory read/write speed, as shown in the sketch below.
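For example, a quick dd-based check against tmpfs might look like the following rough sketch; /dev/shm is assumed to already be a tmpfs mount on your system, and the 1 GB file size is an arbitrary choice. dd reports the achieved throughput when it finishes:

$ dd if=/dev/zero of=/dev/shm/ddtest bs=1M count=1024   # write ~1 GB into RAM
$ dd if=/dev/shm/ddtest of=/dev/null bs=1M              # read it back
$ rm -f /dev/shm/ddtest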

How to do it...

Let's discuss how to install and run the Phoronix memory test suite, and how to configure a tmpfs mount point in Linux:

Phoronix

Let's execute the following phoronix-test-suite command, which will install the memory test suite and perform the memory benchmarking. Once the benchmarking is completed, review the generated HTML report, as mentioned earlier:

$ phoronix-test-suite benchmark pts/memory
Phoronix Test Suite v6.8.0
  Installed: pts/ramspeed-1.4.0
To Install: pts/stream-1.3.1
To Install: pts/cachebench-1.0.0
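The pts/memory suite simply bundles the individual tests listed above. If you want to re-run just one of them later, something along these lines should work; the exact test names depend on the versions Phoronix installs on your system:

$ phoronix-test-suite list-installed-tests
$ phoronix-test-suite benchmark pts/ramspeed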

tmpfs

In Linux, tmpfs is a temporary filesystem that uses RAM rather than disk storage. Anything we store in tmpfs is cleared once we restart the system:

Note

Refer to the following URLs for more information about tmpfs: https://en.wikipedia.org/wiki/Tmpfs and https://www.jamescoyle.net/knowledge/1659-what-is-tmpfs.

Let's create a new mount point based on tmpfs using the following command:

# mkdir -p /memmount
# mount -t tmpfs -o size=1g tmpfs /memmount
# df -kh -t tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.9G   96K  1.9G   1% /dev/shm
tmpfs           1.9G  8.9M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs           1.0G     0  1.0G   0% /memmount
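Note that a mount created this way disappears after a reboot. If you want the benchmark mount to be recreated automatically, one option is to add an entry along the following lines to /etc/fstab (the mount point and size simply mirror the command above):

tmpfs  /memmount  tmpfs  size=1g  0  0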

Let's create a new folder inside /memmount, hand it over to the postgres user, and then create a tablespace on it:

# mkdir -p /memmount/memtabspace
# chown -R postgres:postgres /memmount/memtabspace/
postgres=# CREATE TABLESPACE memtbs LOCATION '/memmount/memtabspace';
CREATE TABLESPACE
postgres=# CREATE TABLE memtable(t INT) TABLESPACE memtbs;
CREATE TABLE
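The Time readings in the following tests assume that psql's \timing option has been switched on beforehand, for example:

postgres=# \timing on
Timing is on.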

Write test

postgres=# INSERT INTO memtable VALUES(generate_series(1, 1000000));
INSERT 0 1000000
Time: 1372.763 ms


postgres=# SELECT pg_size_pretty(pg_relation_size('memtable'::regclass));
 pg_size_pretty
----------------
 35 MB
(1 row)

From the preceding results, inserting 1 million records took approximately 1.4 seconds and produced a 35 MB table, which works out to a write speed of roughly 25 MB per second. To put that figure in perspective, you can repeat the same insert against a regular disk-backed table, as sketched below.
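This comparison is not part of the original recipe; the table name disktable is just an illustrative choice, and the table is created in the default disk-backed tablespace so that the difference in the reported Time values isolates the effect of the storage medium:

postgres=# CREATE TABLE disktable(t INT);
CREATE TABLE
postgres=# INSERT INTO disktable VALUES(generate_series(1, 1000000));
INSERT 0 1000000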

Read test

postgres=# SELECT COUNT(*) FROM memtable;
  count
---------
 1000000
(1 row)
Time: 87.333 ms

From the preceding results, reading the 1 million records took approximately 90 milliseconds, a read speed of about 385 MB per second, which is quite fast for this local system configuration. The read test was performed after clearing the system cache and restarting the PostgreSQL instance, so that the OS cache and PostgreSQL's shared buffers did not skew the result; one way to do that is sketched below.
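A typical sequence for this, run as root, looks something like the following; the systemd service name is an assumption and varies between distributions and packagings:

# systemctl stop postgresql      # service name may differ on your system
# sync; echo 3 > /proc/sys/vm/drop_caches
# systemctl start postgresql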

How it works...

In the preceding tmpfs example, we created an in-memory table, so every read/write system call PostgreSQL issues against that table hits RAM rather than disk, which gives a major performance boost. Also, remember to drop these in-memory tables and the tablespace after testing, since their underlying files physically vanish after a system reboot, leaving the catalog entries pointing at storage that no longer exists.
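A cleanup along the following lines should do; the table has to be dropped first, since PostgreSQL only allows dropping a tablespace once it is empty, and the final umount is only needed if you no longer want the tmpfs mount:

postgres=# DROP TABLE memtable;
DROP TABLE
postgres=# DROP TABLESPACE memtbs;
DROP TABLESPACE
# umount /memmount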