Mastering MongoDB 6.x - Third Edition

By: Alex Giamas

Overview of this book

MongoDB is a leading non-relational database. This book covers all the major features of MongoDB, including those introduced in the latest version, 6. MongoDB 6.x adds many new features and expands on existing ones, such as aggregation, indexing, replication, sharding, and the MongoDB Atlas tools. The Atlas tools you will master include Atlas dedicated clusters and Serverless, Atlas Search, Charts, Realm Application Services/Sync, Compass, Cloud Manager, and Data Lake. By working hands-on with code on realistic use cases, you will master the art of modeling, shaping, and querying your data and become the MongoDB oracle for the business. You will focus on both widely used and niche areas, such as optimizing queries, configuring large-scale clusters, and configuring your cluster for high performance and availability. Later, you will become proficient in auditing, monitoring, and securing your clusters using a structured and organized approach. By the end of this book, you will have the practical understanding needed to design, develop, administer, and scale MongoDB-based database applications, both on premises and in the cloud.
Table of Contents (22 chapters)

Part 1 – Basic MongoDB – Design Goals and Architecture
Part 2 – Querying Effectively
Part 3 – Administration and Data Management
Part 4 – Scaling and High Availability

Cluster backups

A well-known maxim goes as follows:

“Hope for the best, plan for the worst.”

– John Jay (1813)

This should be our approach when designing a backup strategy for MongoDB, because there are several distinct failure events that can occur.

Backups should be the cornerstone of our disaster recovery strategy. Some developers rely on replication alone for disaster recovery, reasoning that keeping three copies of the data is more than enough: if one copy is lost, the cluster can always be rebuilt from the other two.
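As a quick illustration of what a backup adds on top of replication, a logical backup of a replica set can be taken with the mongodump tool and restored with mongorestore. The following is only a minimal sketch; the connection strings, credentials, and output directory are placeholder values, not ones used elsewhere in this book:

    # Dump all databases from the replica set; --oplog also records the write
    # operations applied while the dump runs, so the backup is consistent to a
    # single point in time.
    mongodump --uri="mongodb://backup_user:password@mongo0.example.com:27017/?replicaSet=rs0" \
        --oplog --out=/backups/mydump

    # Restore the dump elsewhere, replaying the recorded oplog entries to bring
    # the data back to that same point in time.
    mongorestore --uri="mongodb://restore_user:password@mongo1.example.com:27017/?replicaSet=rs1" \
        --oplogReplay /backups/mydump

Unlike an extra replica set member, a dump like this can be stored away from the cluster entirely. Rebuilding from the remaining replica set copies, on the other hand, only covers certain failures.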

This is the case in the event of disks...