Mastering Proxmox - Third Edition

By: Wasim Ahmed

Overview of this book

Proxmox is an open source server virtualization solution with enterprise-class features for managing virtual machines and storage, and for virtualizing both Linux and Windows application workloads. You'll begin with a refresher on the advanced installation features and the Proxmox GUI to familiarize yourself with the Proxmox VE hypervisor. Then, you'll explore Proxmox under the hood, focusing on the storage systems, such as Ceph, used with Proxmox. Moving on, you'll learn to manage KVM virtual machines, deploy Linux containers quickly, and see how networking is handled in Proxmox. You'll also learn how to protect a cluster or a VM with a firewall and explore the new high availability features introduced in Proxmox VE 5.0. Next, you'll dive deeper into backup/restore strategies and see how to properly update and upgrade a Proxmox node. Later, you'll learn how to monitor a Proxmox cluster and all of its components using Zabbix. Finally, you'll discover how to recover Proxmox when disaster strikes, through some real-world examples. By the end of the book, you'll be an expert at making Proxmox work in production environments with minimal downtime.

Recovering from Ceph failure


Ceph is a very resilient, highly available storage system. Once a Ceph cluster is configured, it can, for the most part, run maintenance free. Most major issues are caused by a lack of knowledge of how Ceph works, which leads to unnecessary interference with the cluster. In this section, we will highlight some of the most common issues in a Ceph cluster and how to combat them.
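Before making any changes, check what the cluster itself reports. The following is a minimal diagnostic sketch using standard Ceph CLI commands, run from any node with admin access; the comments describe what each command shows:

    # Overall cluster state: health flags, monitor quorum, OSD counts, and PG status
    ceph -s

    # Expanded explanation of any warning or error conditions
    ceph health detail

    # Which OSDs are up/in, and how they map to hosts in the CRUSH hierarchy
    ceph osd tree

Note that HEALTH_WARN during recovery or rebalancing is often normal; intervene only when ceph health detail reports stuck placement groups, down OSDs, or near-full disks.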

Best practices for a healthy Ceph cluster

The following are a few best practices for keeping a Ceph cluster healthy:

  • If possible, keep all settings at their defaults for a healthy cluster.
  • Use Ceph pools only to implement different OSD-type policies, such as one pool for SSDs and another for HDDs, not for multi-tenancy (see the sketch after this list).
  • Do not make frequent Ceph configuration changes. Each change adds extra workload on the cluster's OSDs, which reduces the life of HDDs. After each change, let the cluster rebalance its data before making further changes.
  • Always keep in mind the core count of Ceph nodes when adjusting Ceph threads. Do not let the number of...
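To illustrate the pool-per-OSD-type policy mentioned in the list above, the following is a minimal sketch based on Ceph's device classes and CRUSH rules (available from Ceph Luminous onward). The rule names, pool names, and placement-group counts here are hypothetical examples; size them for your own cluster:

    # Create CRUSH rules that restrict data placement to a single device class
    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd crush rule create-replicated hdd-rule default host hdd

    # Create one pool per device class, each bound to the matching rule
    ceph osd pool create ssd-pool 64 64 replicated ssd-rule
    ceph osd pool create hdd-pool 128 128 replicated hdd-rule

    # Verify which CRUSH rule each pool is using
    ceph osd pool get ssd-pool crush_rule
    ceph osd pool get hdd-pool crush_rule

With this layout, placing a disk image on fast or slow storage is simply a matter of choosing the right pool, and no tenant-separation logic is pushed into the pool layer.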