PostgreSQL High Performance Cookbook

By: Chitij Chauhan, Dinesh Kumar

Overview of this book

PostgreSQL is one of the most powerful and easy-to-use database management systems. It has strong support from the community and is actively developed, with a new release every year. PostgreSQL supports the most advanced features included in the SQL standards. It also provides NoSQL capabilities, rich data types, and extensions. All of this makes PostgreSQL an attractive choice for software systems. If you run a database, you want it to perform well and you want to be able to secure it. As the world’s most advanced open source database, PostgreSQL has unique built-in ways to achieve these goals. This book will show you a multitude of ways to enhance your database’s performance and give you insights into measuring and optimizing a PostgreSQL database to achieve better performance. This book is your one-stop guide for taking your PostgreSQL knowledge to the next level. First, you’ll familiarize yourself with essential developer/administrator concepts such as load balancing, connection pooling, and distributing connections to multiple nodes. Next, you will look at memory optimization techniques before exploring the security controls offered by PostgreSQL. Then, you will move on to the essential database/server monitoring and replication strategies with PostgreSQL. Finally, you will learn about query processing algorithms.

Identifying checkpoint overhead


In this recipe, we will discuss the checkpoint process, which can cause heavy I/O while flushing dirty buffers to disk.
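A quick way to gauge how much work checkpoints are doing is to query the cumulative statistics collector. The following is a minimal sketch, assuming PostgreSQL 9.5 through 16, where the checkpoint counters are exposed in the pg_stat_bgwriter view (PostgreSQL 17 moves them to pg_stat_checkpointer):

SELECT checkpoints_timed,      -- checkpoints triggered by checkpoint_timeout
       checkpoints_req,        -- checkpoints forced early, e.g. on reaching max_wal_size
       checkpoint_write_time,  -- total ms spent writing dirty buffers during checkpoints
       checkpoint_sync_time,   -- total ms spent syncing files to disk during checkpoints
       buffers_checkpoint      -- number of buffers written by checkpoints
FROM   pg_stat_bgwriter;

A high ratio of checkpoints_req to checkpoints_timed suggests that max_wal_size is too small for the write load, so checkpoints are being forced early rather than running on schedule.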

Getting ready

In PostgreSQL, a checkpoint is an activity that flushes all the dirty buffers from the shared buffers to persistent storage, and then records the checkpoint's position in the transaction log (WAL) and the control file. This means the checkpoint guarantees that all changes up to that position are safely stored on disk. PostgreSQL issues checkpoints automatically, as governed by the checkpoint_timeout and max_wal_size settings, which keep the data consistent on storage as configured. If shared_buffers is configured to be huge, the number of dirty buffers it can accumulate is also greater, which leads to more I/O for the next checkpoint. It is also not recommended to tune the checkpoint_timeout and max_wal_size settings to be aggressive...
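To observe checkpoint behavior directly, you can enable checkpoint logging and adjust the pacing settings mentioned above. The following is a hedged sketch; the values shown are illustrative starting points, not recommendations, and should be sized to your workload:

ALTER SYSTEM SET log_checkpoints = on;               -- report each checkpoint's cost in the server log
ALTER SYSTEM SET checkpoint_timeout = '15min';       -- time-based checkpoint interval
ALTER SYSTEM SET max_wal_size = '2GB';               -- WAL volume that forces a checkpoint
ALTER SYSTEM SET checkpoint_completion_target = 0.9; -- spread checkpoint writes over 90% of the interval
SELECT pg_reload_conf();                             -- apply the changes without a restart

With log_checkpoints enabled, each checkpoint logs how many buffers it wrote and how long the write and sync phases took, which makes checkpoint-related I/O spikes easy to correlate with what you see at the operating system level.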