Mastering PostgreSQL 9.6

By: Hans-Jürgen Schönig

Overview of this book

PostgreSQL is an open source database used for handling large datasets (Big Data) and as a JSON document database. It also has applications in the software and web domains. This book will enable you to build better PostgreSQL applications and administer databases more efficiently. We begin by explaining the advanced database design concepts in PostgreSQL 9.6, along with indexing and query optimization. You will also see how to work with event triggers and perform concurrent transactions and table partitioning, along with exploring SQL and server tuning. We will walk you through implementing advanced administrative tasks such as server maintenance and monitoring, replication, recovery and high availability, and much more. You will understand the common and not-so-common troubleshooting problems and how you can overcome them. By the end of this book, you will have an expert-level command of the advanced database functionalities and will be able to implement advanced administrative tasks with PostgreSQL.

PostgreSQL Overview

What is new in PostgreSQL 9.6?

PostgreSQL 9.6 was released in late 2016 and is the last version to follow the old numbering scheme PostgreSQL has used for more than a decade. From PostgreSQL 10.0 onward, a new version numbering system will be in place, and major releases will happen far more frequently.

Understanding new database administration functions

PostgreSQL 9.6 has many new features that help administrators reduce their workload and make systems more robust.

One of those features is the idle_in_transaction_session_timeout setting.

Killing idle sessions

In PostgreSQL, a session or a transaction can basically live almost forever. In some cases, this has been a problem because transactions were kept open for too long, usually due to a bug. The trouble is that insanely long transactions can cause cleanup problems and lead to table bloat. The uncontrolled growth of a table (table bloat) naturally leads to performance problems and unhappy end users.

Starting with PostgreSQL 9.6, it is possible to limit the duration a database connection is allowed to spend inside a transaction without performing real work. Here is how it works:

test=# SET idle_in_transaction_session_timeout TO 2500; 
SET
test=# BEGIN;
BEGIN
test=# SELECT 1;
 ?column?
----------
        1
(1 row)

test=# SELECT 1;
FATAL: terminating connection due to idle-in-transaction timeout

Administrators and developers can set a timeout, which is 2.5 seconds (2500 milliseconds) in my example. As soon as a transaction is idle for too long, the connection is terminated automatically by the server. The nasty side effects of long idle transactions can easily be prevented by adjusting this parameter.
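
If the limit should apply to every connection rather than a single session, the same parameter can also be set at the instance or role level. The following is merely a sketch; the role name app_user and the values are illustrative, not recommendations:

ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();   -- reload the configuration so the new value takes effect
-- or, for one application role only:
ALTER ROLE app_user SET idle_in_transaction_session_timeout = '30s';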

Finding more detailed information in pg_stat_activity

pg_stat_activity is a system view that has been around for many years. It basically contains a list of active connections. In older versions of PostgreSQL, administrators could see that a query was waiting for somebody else; however, it was not possible to figure out why and for whom. This has changed in 9.6. Two columns have been added:

test=# \d pg_stat_activity  
View "pg_catalog.pg_stat_activity"
Column | Type | Modifiers
------------------+--------------------------+-----------
...
wait_event_type | text |
wait_event | text |
...

In addition to these new columns, a new function has been added, which shows who caused whom to wait:

test=# SELECT * FROM pg_blocking_pids(4711);
pg_blocking_pids
------------------
{3435}
(1 row)

When the function is called, it will return a list of blocking PIDs.
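
A handy trick is to combine the function with pg_stat_activity to get an overview of all blocked sessions at once. This is only a sketch; which columns to display is a matter of taste:

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM   pg_stat_activity
WHERE  cardinality(pg_blocking_pids(pid)) > 0;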

Tracking vacuum progress

For many years, people have asked for a progress tracker for vacuum. Finally, PostgreSQL 9.6 makes this wish come true by introducing a new system view. Here is how it works:

postgres=# SELECT * FROM pg_stat_progress_vacuum;
-[ RECORD 1 ]------+--------------
pid                | 29546
datid              | 67535
datname            | test
relid              | 16402
phase              | scanning heap
heap_blks_total    | 6827
heap_blks_scanned  | 77
heap_blks_vacuumed | 0
index_vacuum_count | 0
max_dead_tuples    | 154
num_dead_tuples    | 0

PostgreSQL will provide detailed information about ongoing vacuum processes so that people can track the progress of this vital operation.
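
If you want a quick progress indicator, the view can be joined with pg_class to show the table name and the percentage of the heap scanned so far. The following query is merely a sketch:

SELECT p.pid,
       c.relname,
       p.phase,
       round(100.0 * p.heap_blks_scanned / nullif(p.heap_blks_total, 0), 1) AS pct_scanned
FROM   pg_stat_progress_vacuum AS p
       JOIN pg_class AS c ON c.oid = p.relid;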

Improving vacuum speed

PostgreSQL 9.6 not only provides deeper insight into what vacuum is currently doing, it also speeds up the process in general. From 9.6 onward, PostgreSQL keeps track of pages that are already fully frozen and skips them during vacuum.

Tables that are mostly read-only benefit massively from this change, as the vacuum load is drastically reduced.

Digging into new SQL and developer-related functions

One of the most promising new features of PostgreSQL 9.6 is the ability to perform phrase searching. Up to 9.5, it was only possible to search for individual words; phrase searching was very hard to do. 9.6 nicely removes this limitation. Here is an example of how it works:

test=# SELECT phraseto_tsquery('Under pressure') @@ to_tsvector('Something was under some sort of pressure');
 ?column?
----------
 f
(1 row)

test=# SELECT phraseto_tsquery('Under pressure') @@ to_tsvector('Under pressure by David Bowie hit number 1 again');
 ?column?
----------
 t
(1 row)

The first query returns false because the two words do not appear directly next to each other in the text. In the second example, true is returned because there really is a proper match.

However, there is more: in 9.6, it is possible to check whether words show up at a certain distance from each other. In the following example, we want exactly one word between united and nations:

test=# SELECT tsquery('united <2> nations') @@ to_tsvector('are we really united, happy nations?');
 ?column?
----------
 t
(1 row)

test=# SELECT tsquery('united <2> nations') @@ to_tsvector('are we really at united nations?');
 ?column?
----------
 f
(1 row)

The second example returns false as there is no word between united and nations.
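
Phrase queries use the same full-text machinery as ordinary word searches, so they can be accelerated by a regular GIN index on the tsvector. The following is only a sketch; the table docs and its column body are made up:

CREATE TABLE docs (
    id   serial PRIMARY KEY,
    body text
);

-- an expression index on the computed tsvector
CREATE INDEX docs_fts_idx ON docs USING gin (to_tsvector('english', body));

-- the query has to use the same expression for the index to be usable
SELECT id
FROM   docs
WHERE  to_tsvector('english', body) @@ phraseto_tsquery('english', 'under pressure');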

Using new backup and replication functionality

PostgreSQL 9.6 has also seen improvements in the area of backup and recovery.

Streamlining wal_level and monitoring

The wal_level setting has always been a bit hard to understand for many people. Many struggled with the difference between the archive and hot_standby settings. To remove this confusion altogether, both settings have been replaced with the easier-to-understand replica setting, which does the same thing hot_standby did.

In addition to that, the monitoring of replicated setups has been simplified. Prior to 9.6, there was only the pg_stat_replication view, which could be queried on the master to supervise the flow of data to the slaves. Now it is also possible to monitor the flow of data on the slaves by consulting the pg_stat_wal_receiver view. It is basically the slave-side mirror of pg_stat_replication and helps to determine the state of replication.
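
A quick check on a standby might look like the following sketch (the columns shown are just a useful subset of the view):

-- run on the slave
SELECT status,
       received_lsn,
       latest_end_lsn,
       latest_end_time
FROM   pg_stat_wal_receiver;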

Using multiple synchronous standby servers

PostgreSQL has been able to perform synchronous replication for quite a while already. From 9.6 onward, it is possible to have more than just one synchronous server. Previously, only one server had to acknowledge a commit; now an entire group of servers can be required to confirm a commit. This is especially important if you want to improve reliability in the case of multi-node failures.

The syntax to use this new feature is simple:

synchronous_standby_names = '3 (server1, server2, server3, server4)'

In this example, a commit has to be confirmed by three synchronous standbys, which are chosen from the listed names in priority order.

However, there is more to synchronous replication in PostgreSQL 9.6. Previously, PostgreSQL ensured (synchronous_commit = on) that the transaction log had reached the slave. However, this did not mean that the data was actually visible there yet. Consider an example: somebody adds a user on the master, instantly connects to the slave, and checks for the user. While the transaction log was guaranteed to be on the slave, it was not necessarily guaranteed that the data inside the log was already visible to the end user (due to replication conflicts and so on). By setting synchronous_commit = 'remote_apply', it is now possible to query the slave directly after a commit on the master without having to worry that the data might not be visible yet. The remote_apply value is slower than on, but it allows you to write more advanced applications.
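
A minimal sketch of how remote_apply can be used follows; the table users is made up, and the point is only the visibility guarantee:

-- on the master
SET synchronous_commit = 'remote_apply';
BEGIN;
INSERT INTO users (name) VALUES ('new user');
COMMIT;   -- returns only after the change has been applied on the synchronous standbys
-- a read on the slave immediately afterwards is guaranteed to see the new row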

Understanding performance-related features

Just like every release of PostgreSQL, 9.6 includes numerous performance improvements that can help to speed up applications. In this section, I want to focus on the most important and most powerful ones. Of course, there are many more small improvements than the ones listed here.

Improving relation extensions

For many years, PostgreSQL extended a table (or an index) block by block. In the case of a single writer process, this was usually fine. However, with many concurrent writers, extending the relation one block at a time was a source of contention and suboptimal performance. From 9.6 onward, PostgreSQL extends tables by multiple blocks at a time: the number of blocks added is 20 times the number of waiting processes.

Checkpoint sorting and kernel interaction

When PostgreSQL writes changes to disk during a checkpoint, it now does so in a more orderly way to ensure that the writes are more sequential than before. This is achieved by sorting blocks before writing them out. Random writes are dramatically reduced this way, which in turn leads to higher throughput on most hardware.

Sorted checkpoints are not the only scalability improvement to make it into 9.6. There are also new kernel writeback configuration options. What does this mean? In the case of large caches, it can take quite a long time to write all changes out. This used to be especially nasty on systems with hundreds of gigabytes of memory, because fairly intense I/O storms could happen. Of course, the operating system-level behavior of Linux could already be changed via /proc/sys/vm/dirty_background_ratio. However, only a handful of consultants and system administrators actually knew how to do that and why. The checkpoint_flush_after, bgwriter_flush_after, and backend_flush_after settings can now be used to control the flush behavior from within PostgreSQL. In general, the rule is to flush earlier. Still, as the feature is new, people are still gathering experience on how to use these settings in the most efficient way possible.
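
These parameters can be changed like any other configuration setting. The values below are purely illustrative and not tuning advice:

ALTER SYSTEM SET checkpoint_flush_after = '256kB';
ALTER SYSTEM SET bgwriter_flush_after = '512kB';
ALTER SYSTEM SET backend_flush_after = '256kB';   -- a value of 0 disables it
SELECT pg_reload_conf();                          -- reload the configuration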

Using more advanced foreign data wrappers

Foreign data wrappers have been around for many years. Starting with PostgreSQL 9.6, the optimizer can use foreign tables far more efficiently. This includes join pushdown (joins can now be performed remotely) and sort pushdown (sorting can now happen remotely). Distributing data inside a cluster is now considerably more efficient thanks to faster remote operations.
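
To see this in action with postgres_fdw, a setup might look like the following sketch. All names (remote_srv, remote_host, and so on) are made up, and the remote tables are assumed to exist:

CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote_host', dbname 'remote_db');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_srv
    OPTIONS (user 'app', password 'secret');

CREATE FOREIGN TABLE remote_orders (id int, customer_id int)
    SERVER remote_srv OPTIONS (table_name 'orders');

CREATE FOREIGN TABLE remote_customers (id int, name text)
    SERVER remote_srv OPTIONS (table_name 'customers');

-- With 9.6, a join between two foreign tables on the same server can be shipped
-- to the remote side; EXPLAIN then shows a single Foreign Scan instead of a local
-- join between two separate foreign scans.
EXPLAIN
SELECT o.id, c.name
FROM   remote_orders AS o
       JOIN remote_customers AS c ON c.id = o.customer_id;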

Introducing parallel queries

Traditionally, a query had to run on a single CPU core. While this was just fine in the OLTP world, it became a problem for analytical applications, which were bound to the speed provided by a single core. With PostgreSQL 9.6, parallel queries were introduced. Of course, implementing parallel queries was hard, so a lot of infrastructure had already been put in place over the previous releases. All of this infrastructure is now used to provide the end user with parallel sequential scans: the idea is to make many CPU cores work on complicated WHERE conditions during a sequential scan. Version 9.6 also allows for parallel aggregates and parallel joins. Of course, there is a lot of work left, but we are already looking at a major leap forward.

To control parallelism, there are two essential settings:

test=# SHOW max_worker_processes; 
max_worker_processes
----------------------
8
(1 row)

test=# SHOW max_parallel_workers_per_gather ;
max_parallel_workers_per_gather
---------------------------------
2
(1 row)

The first one limits the overall number of worker processes available. The second one controls the number of workers allowed per gather node.

A gather node is a new node type you will see in execution plans. It is in charge of collecting and unifying the results coming from the parallel worker processes.

In addition to those fundamental settings, there are a couple of new optimizer parameters to adjust the cost of parallel queries.
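
To see parallelism at work, a simple test might look like the following sketch. The table t_large and the numbers are made up, and the exact plan depends on the data and the cost settings:

CREATE TABLE t_large AS
    SELECT id FROM generate_series(1, 5000000) AS id;
ANALYZE t_large;

EXPLAIN SELECT count(*) FROM t_large WHERE id % 7 = 0;
-- expected shape of the plan (details will vary):
--  Finalize Aggregate
--    ->  Gather
--          Workers Planned: 2
--          ->  Partial Aggregate
--                ->  Parallel Seq Scan on t_large
--                      Filter: ((id % 7) = 0)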

Adding snapshot too old

Those of you using Oracle might be familiar with the following error message: snapshot too old. In Oracle, this message indicates that a transaction has been running for too long, so it has to be aborted. In PostgreSQL, transactions can run almost infinitely. However, long transactions can still be a problem, so the snapshot too old error has been added as a feature to 9.6, which allows transactions relying on old snapshots to be aborted after a certain amount of time.

The idea behind that is to prevent table bloat and to make sure that end users are aware of the fact that they might be about to do something stupid.
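
The feature is controlled by a single configuration parameter, old_snapshot_threshold. A minimal sketch follows; the value is just an illustration, and changing this parameter requires a server restart:

ALTER SYSTEM SET old_snapshot_threshold = '1h';
-- after a restart, a query relying on a sufficiently old snapshot may fail with:
-- ERROR:  snapshot too old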