PostgreSQL High Performance Cookbook

By: Chitij Chauhan, Dinesh Kumar

Overview of this book

PostgreSQL is one of the most powerful and easy-to-use database management systems. It has strong support from the community and is actively developed, with a new release every year. PostgreSQL supports the most advanced features included in the SQL standards, and it also provides NoSQL capabilities along with very rich data types and extensions. All of this makes PostgreSQL a very attractive choice for software systems. If you run a database, you want it to perform well and you want to be able to secure it. As the world’s most advanced open source database, PostgreSQL has unique built-in ways to achieve these goals. This book will show you a multitude of ways to enhance your database’s performance and give you insights into measuring and optimizing a PostgreSQL database. It is your one-stop guide to elevating your PostgreSQL knowledge to the next level. First, you’ll become familiar with essential developer/administrator concepts such as load balancing, connection pooling, and distributing connections to multiple nodes. Next, you will explore memory optimization techniques before examining the security controls offered by PostgreSQL. Then, you will move on to essential database/server monitoring and replication strategies with PostgreSQL. Finally, you will learn about query processing algorithms.

Running sequential scans


In this recipe, we will be discussing sequential scans.

Getting ready

A sequential scan is a mechanism by which PostgreSQL reads every tuple of a relation from beginning to end. The simplest example of a sequential scan is reading an entire table without any predicate. A sequential scan is preferred over an index scan when a query reads most of the data in the table, because it avoids the overhead of the index lookups.
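
The following is a minimal sketch of such a case; the table name test_seq and the generated data are hypothetical and used only for illustration:

CREATE TABLE test_seq (id integer, val text);
INSERT INTO test_seq
  SELECT g, 'row ' || g FROM generate_series(1, 100000) AS g;
ANALYZE test_seq;

-- Reading the whole table with no predicate; the plan should show
-- a Seq Scan node on test_seq.
EXPLAIN SELECT * FROM test_seq;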

Reading pages in sequential order takes less effort than reading them in random order, because a sequential read does not need to reposition the file pointer for each page. During an index scan, by contrast, PostgreSQL must fetch pages scattered across the file according to the index results, moving the read pointer from page to page. This is why the planner's cost parameter seq_page_cost defaults to 1, which is lower than the random_page_cost default of 4.
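
As a quick check, these two planner cost settings can be inspected from psql; the values shown in the comments are the stock PostgreSQL defaults:

SHOW seq_page_cost;     -- 1 by default
SHOW random_page_cost;  -- 4 by default

-- The same values are also visible in pg_settings:
SELECT name, setting
  FROM pg_settings
 WHERE name IN ('seq_page_cost', 'random_page_cost');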

How to do it…

  1. Let's run a query that reads the...