PostgreSQL High Performance Cookbook

By: Chitij Chauhan, Dinesh Kumar

Overview of this book

PostgreSQL is one of the most powerful and easy-to-use database management systems. It has strong support from the community and is actively developed, with a new release every year. PostgreSQL supports the most advanced features of the SQL standard, and it also provides NoSQL capabilities along with very rich data types and extensions. All of this makes PostgreSQL a very attractive choice for software systems. If you run a database, you want it to perform well and you want to be able to secure it. As the world’s most advanced open source database, PostgreSQL has unique built-in ways to achieve these goals. This book will show you a multitude of ways to enhance your database’s performance and give you insights into measuring and optimizing a PostgreSQL database to achieve better performance.

This book is your one-stop guide to elevating your PostgreSQL knowledge to the next level. First, you’ll become familiar with essential developer/administrator concepts such as load balancing, connection pooling, and distributing connections to multiple nodes. Next, you will explore memory optimization techniques, followed by the security controls offered by PostgreSQL. Then, you will move on to essential database/server monitoring and replication strategies with PostgreSQL. Finally, you will learn about query processing algorithms.

Storage sizing


In this recipe, we will be discussing how to estimate disk growth using the pgbench tool.

Getting ready

One of the best ways to predict a database's disk storage requirements is to load a set of sample data into the application's database and then simulate production-like activity using pgbench over a long period. At a fixed interval (say, every hour), collect the database size using pg_database_size() or any native command that returns disk usage information. Once we have collected these samples for at least 24 hours, we can estimate the average disk growth rate by averaging the deltas between consecutive samples.
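
For example, once the hourly samples are stored in a table, the average hourly growth is simply the mean of the deltas between consecutive samples. The following is a minimal sketch; the db_size_history table and its collected_at/size_bytes columns are hypothetical names for wherever you record the samples:

-- Average disk growth in bytes per hour, computed as the mean of the
-- deltas between consecutive hourly samples (db_size_history is hypothetical).
SELECT avg(delta) AS avg_hourly_growth_bytes
FROM (SELECT size_bytes - lag(size_bytes) OVER (ORDER BY collected_at) AS delta
      FROM db_size_history) AS deltas
WHERE delta IS NOT NULL;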

How to do it...

Prepare a SQL script as follows, which simulates the live application behavior in the database (the table name, columns, and ID range below are placeholders; adapt them to your application's schema):

-- pgbench opens the connections itself (-c); add -C to reconnect for every transaction.
-- The txn_history table, its columns, and the ID range are placeholders; random() in \set requires pgbench 9.6+.
\set id random(1, 100000)
INSERT INTO txn_history (id, payload, updated_at) VALUES (:id, md5(random()::text), now());  --- Initial write operation.
SELECT pg_sleep(0.01);                                       --- Some application code runs here, waiting for the next query.
UPDATE txn_history SET updated_at = now() WHERE id = :id;    --- Update the newly inserted records.
SELECT pg_sleep(0.1);                                        --- Other services refresh their live graphs on the updated records.
DELETE FROM txn_history WHERE updated_at < now() - interval '1 hour';  --- Delete or purge any unnecessary data.
SELECT pg_sleep(0.01);                                       --- Some application code overhead.

Let's run the following pgbench test case with the preceding script file for 24 hours:

$ pgbench -T 86400 -f <script location> -c <number of concurrent connections>
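
For example, assuming the preceding script was saved as storage_sim.sql and the target database is named pgbench_db (both names are placeholders), a 24-hour run with 20 concurrent connections would look like this:

$ pgbench -T 86400 -f storage_sim.sql -c 20 pgbench_db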

In parallel, let's schedule a job that collects the database size every hour using the pg_database_size() function, and another job that runs VACUUM on the database every 10 minutes. The VACUUM job takes care of reclaiming the dead tuples at the database level. However, on production servers we would not deploy a VACUUM job every 10 minutes, as the autovacuum process takes care of the dead tuples. As this test is not a database performance benchmark, we can also simply make autovacuum more aggressive on the database side instead.
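
As a minimal sketch, both jobs could be scheduled with cron as follows; the pgbench_db database name and the log file path are assumptions to adapt to your environment:

# Hypothetical cron entries: record the database size every hour,
# and run VACUUM every 10 minutes during the test.
0 * * * *    psql -d pgbench_db -At -c "SELECT now(), pg_database_size(current_database());" >> /var/log/db_size_history.log
*/10 * * * * psql -d pgbench_db -c "VACUUM;"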

How it works...

Once we find the average disk growth per day, we can predict the database growth for the next one or two years. For example, an average growth of 2 GB per day projects to roughly 730 GB per year. However, the database write rate also increases as the business grows, so we need to keep the database growth script deployed, or analyze the disk storage trends in a monitoring tool, to make a better prediction of the required storage size.
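
Building on the hypothetical db_size_history table from the Getting ready section, a simple linear projection over two years could be sketched as follows:

-- Extrapolate the average hourly delta to a two-year figure
-- (24 hours x 365 days x 2 years); a rough linear estimate only.
SELECT pg_size_pretty((avg(delta) * 24 * 365 * 2)::bigint) AS projected_two_year_growth
FROM (SELECT size_bytes - lag(size_bytes) OVER (ORDER BY collected_at) AS delta
      FROM db_size_history) AS deltas
WHERE delta IS NOT NULL;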