The pg_dump utility is the more traditional way of creating a backup. It works well for small amounts of data, but it runs into limitations once the data grows beyond a certain size. Don't get me wrong; pg_dump works perfectly even with terabytes of data. However, suppose you have a dump of a 10 TB beast: does it really make sense to replay a 10 TB database from a dump? Just consider all the indexes that have to be rebuilt, and the insane amount of time that will take. At that scale, it definitely makes sense to use a different method. This method is called point-in-time recovery (PITR), or simply xlog (WAL) archiving. In this section, you will learn about PITR in detail.
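As a rough sketch of what xlog archiving involves, the server is told to copy each completed WAL segment to an archive location. The settings below are standard PostgreSQL parameters; the archive path `/mnt/archive` is just an example, and in production you would typically archive to a separate machine or storage system:

```
# postgresql.conf — a minimal WAL archiving setup (example paths)

wal_level = replica        # write enough WAL for recovery/replication
archive_mode = on          # activate the archiver process
# %p = path of the WAL file, %f = its file name.
# The 'test ! -f' guard refuses to overwrite an existing archive file.
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'
```

Recovery then starts from a base backup (for example, one taken with pg_basebackup) and replays the archived WAL, typically via a `restore_command` such as `cp /mnt/archive/%f %p`, optionally stopping at a chosen `recovery_target_time`; only the WAL is replayed, so there is no need to rebuild every index from scratch as a dump restore would.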
Troubleshooting PostgreSQL