PostgreSQL High Performance Cookbook

By: Chitij Chauhan, Dinesh Kumar

Overview of this book

PostgreSQL is one of the most powerful and easy-to-use database management systems. It has strong support from the community and is actively developed, with a new release every year. PostgreSQL supports the most advanced features included in the SQL standards. It also provides NoSQL capabilities along with very rich data types and extensions. All of this makes PostgreSQL a very attractive solution in software systems. If you run a database, you want it to perform well and you want to be able to secure it. As the world’s most advanced open source database, PostgreSQL has unique built-in ways to achieve these goals. This book will show you a multitude of ways to enhance your database’s performance and give you insights into measuring and optimizing a PostgreSQL database to achieve better performance. This book is your one-stop guide to elevating your PostgreSQL knowledge to the next level. First, you’ll get familiar with essential developer/administrator concepts such as load balancing, connection pooling, and distributing connections to multiple nodes. Next, you will explore memory optimization techniques before exploring the security controls offered by PostgreSQL. Then, you will move on to the essential database/server monitoring and replication strategies with PostgreSQL. Finally, you will learn about query processing algorithms.

Disk benchmarking


In this recipe, we will be discussing how to benchmark disk speed using open source tools.

Getting ready

The well-known command for disk I/O benchmarking is dd. We can use the dd command to measure read/write throughput by specifying the required block size, and we can also measure direct I/O by skipping the system write buffers. Similarly, the Phoronix Test Suite offers a complete test suite for the disk, as it does for the CPU and memory, which performs different storage-related tests. Another well-known disk benchmarking tool is bonnie++, which provides more flexibility in measuring disk I/O.
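As a quick illustration, the following dd commands write and then read back a 2 GB file using direct I/O; the file path, block size, and count here are only illustrative, and the oflag=direct/iflag=direct options (GNU/Linux) bypass the system buffers:

$ dd if=/dev/zero of=/tmp/dd_test bs=8k count=262144 oflag=direct
$ dd if=/tmp/dd_test of=/dev/null bs=8k iflag=direct
$ rm -f /tmp/dd_test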

How to do it...

Let us discuss how to run disk benchmarking using the Phoronix Test Suite and the bonnie++ testing tool:

Phoronix

To run the complete disk test suite on the system, run the following command:

$ phoronix-test-suite benchmark pts/disk
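If you first want to see which individual tests the disk suite bundles, recent versions of the Phoronix Test Suite can describe a suite or test before you run it (subcommand availability varies by version):

$ phoronix-test-suite info pts/disk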

Phoronix also supports a quick I/O test case, where you can perform an instant disk performance test using the following command. The test is interactive; it collects your input and then runs the test cases:

$ phoronix-test-suite benchmark pts/iozone
Phoronix Test Suite v6.8.0
    Installed: pts/iozone-1.8.0
Disk Test Configuration
    1: 4Kb
    2: 64Kb
    3: 1MB
    4: Test All Options
    Record Size: 1
    1: 512MB
    2: 2GB
    3: 4GB
    4: 8GB
    5: Test All Options
    File Size: 1
    1: Write Performance
    2: Read Performance
    3: Test All Options
    Disk Test: 3
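If you need to run this test unattended, for instance on a remote server, Phoronix also provides a batch mode that lets you pre-answer the prompts once and then run without interaction (the exact prompts vary by version):

$ phoronix-test-suite batch-setup
$ phoronix-test-suite batch-benchmark pts/iozone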

bonnie++

bonnie++ is a filesystem and disk-level benchmarking tool that can perform the same test multiple times. You can install this tool using either yum or apt-get, or by building it from the source code. Let's run the bulk I/O test case using the following arguments, where bonnie++ tries to create 8 GB of test files:

$ /usr/local/sbin/bonnie++ -D -d /tmp/ -s 8G -b
Writing with putc()...done
Writing intelligently...done
...
localhost.localdomain,8G,68996,106,14151,53,46772,15,95343,93,123633,16,201.0,7,16,795,58,+++++,+++,733,46,757,57,+++++,+++,592,38

How it works...

Let us discuss how bonnie++ performs the benchmarking, and which tools bonnie++ offers to help us understand the benchmarking results:

bonnie++

In the preceding test case, we instructed bonnie++ to use only direct I/O via the -D option. We also asked it to create 8 GB of random files in the /tmp/ location to measure the disk speed. As its final output, bonnie++ prints a line of CSV values, which we can feed to the bon_csv2html command to produce a detailed HTML report of the test results, as shown in the following command:

$ echo "localhost.localdomain,8G,68996,106,14151,53,46772,15,95343,93,123633,16,201.0,7,16,795,58,+++++,+++,733,46,757,57,+++++,+++,592,38"|bon_csv2html > ~/Desktop/bonresults.html

bonnie++ performs three different kinds of tests for disk benchmarking: read speed, write speed, and seek speed. We will be discussing the seek rate in further topics. For better disk performance, bonnie++ recommends higher numbers in the /sec columns of the results and lower %CPU values. Also, a +++++ entry indicates that the test was not measured accurately by bonnie++, as it completed too quickly with the provided arguments. To get complete results, we need to rerun the same test with a larger workload using the -n option, which increases the number of files in the file-creation tests so that bonnie++ gets enough work to produce measurable timings.
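For example, a rerun that also sizes the file-creation workload might look like the following; the -n value here is illustrative, and each unit of -n represents 1024 files:

$ /usr/local/sbin/bonnie++ -D -d /tmp/ -s 8G -n 128 -b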