PostgreSQL 12 High Availability Cookbook - Third Edition

By: Shaun Thomas

Overview of this book

Databases are nothing without the data they store. In the event of an outage or technical catastrophe, immediate recovery is essential. This updated edition ensures that you will learn the important concepts related to node architecture design, as well as techniques such as using repmgr for failover automation. From cluster layout and hardware selection to software stacks and horizontal scalability, this PostgreSQL cookbook will help you build a PostgreSQL cluster that will survive crashes, resist data corruption, and grow smoothly with customer demand. You’ll start by understanding how to plan a PostgreSQL database architecture that is resistant to outages and scalable, as it is the scaffolding on which everything else rests. With the bedrock established, you'll cover the topics that PostgreSQL database administrators need to know to manage a highly available cluster. This includes configuration, troubleshooting, monitoring and alerting, backups through proxies, failover automation, and other considerations that are essential for a healthy PostgreSQL cluster. You’ll then learn to use multi-master replication to maximize server availability, and later chapters will guide you through managing major version upgrades without downtime. By the end of this book, you’ll have learned how to build an efficient and adaptive PostgreSQL 12 database cluster.

Overview of multi-master

Consider the following diagram:

We can see in the preceding diagram that there are two PostgreSQL nodes: Node A sends data from its WAL to Node B via WAL streaming, a feature that has been available since PostgreSQL 9.0.
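For reference, a minimal physical streaming setup on PostgreSQL 12 could be sketched as follows; the host names, the rep_user role, the IP address, and the data directory path are placeholder assumptions, not values taken from this recipe:

    # Node A (primary), postgresql.conf: defaults shown here for clarity.
    wal_level = replica
    max_wal_senders = 10

    # Node A, pg_hba.conf: allow Node B to connect for replication.
    host  replication  rep_user  10.0.0.2/32  scram-sha-256

    # Node B (standby): clone the primary; -R creates standby.signal and
    # writes primary_conninfo so the standby streams WAL from Node A.
    pg_basebackup -h nodea.example.com -U rep_user \
        -D /var/lib/postgresql/12/main -R -P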

The primary divergence here from regular streaming replication is the element labeled LD, which in this case stands for logical decoder. Node B contains a similar additional element that we've labeled LA for logical apply.
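To watch the decoder in isolation, before any multi-master machinery is involved, we can read decoded WAL through the built-in test_decoding plugin. This is only a sketch: the slot and table names are our own examples, and wal_level must be set to logical on the decoding node:

    -- Create a throwaway slot that decodes WAL with test_decoding.
    SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');

    -- Generate a change to decode (demo_notes is a scratch table).
    CREATE TABLE demo_notes (id int PRIMARY KEY, note text);
    INSERT INTO demo_notes VALUES (1, 'hello, logical decoding');

    -- Changes come back as readable logical operations, not raw WAL.
    SELECT * FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);

    -- Drop the slot so it stops retaining WAL.
    SELECT pg_drop_replication_slot('demo_slot');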

In standard streaming replication, WAL is transmitted unchanged and applied to the data files exactly as it is received. While fast and efficient, this means that every streaming replica must be an exact copy of the upstream system. This is fine for distributing read traffic, but it is somewhat limiting, since we can't replicate only a few tables, import only some of the data, and so on.
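By way of contrast, built-in logical replication (a PostgreSQL 10+ feature built on the same decoding infrastructure) lets a subscriber pick up only selected tables; the connection string and object names below are illustrative assumptions:

    -- On Node A: publish only the tables we want to share.
    CREATE PUBLICATION sales_pub FOR TABLE orders, order_items;

    -- On Node B: matching empty tables must already exist; the subscription
    -- copies the initial data and then streams ongoing changes.
    CREATE SUBSCRIPTION sales_sub
        CONNECTION 'host=nodea.example.com dbname=sales user=rep_user'
        PUBLICATION sales_pub;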

When logical...