Mastering Proxmox - Third Edition

By: Wasim Ahmed

Overview of this book

Proxmox is an open source server virtualization solution that provides enterprise-class features for managing virtual machines and storage, and for virtualizing both Linux and Windows application workloads. You'll begin with a refresher on the advanced installation features and the Proxmox GUI to familiarize yourself with the Proxmox VE hypervisor. Then, you'll explore Proxmox under the hood, focusing on the storage systems, such as Ceph, used with Proxmox. Moving on, you'll learn to manage KVM virtual machines, deploy Linux containers quickly, and see how networking is handled in Proxmox. You'll also learn how to protect a cluster or a VM with a firewall and explore the new high availability features introduced in Proxmox VE 5.0. Next, you'll dive deeper into backup/restore strategies and see how to properly update and upgrade a Proxmox node. Later, you'll learn how to monitor a Proxmox cluster and all of its components using Zabbix. Finally, you'll discover how to recover Proxmox from disasters through some real-world examples. By the end of the book, you'll be an expert at making Proxmox work in production environments with minimal downtime.
Table of Contents (23 chapters)
Title Page
Credits
About the Author
About the Reviewers
www.PacktPub.com
Customer Feedback
Preface

Installing a Ceph cluster


The following diagram is a basic representation of Proxmox and a Ceph cluster. Note that both clusters are on separate subnets connected to separate switches:

A Ceph cluster should be set up on a separate subnet with its own switch to keep it isolated from the Proxmox public subnet and to ensure optimal Ceph cluster performance. The Ceph Sync LAN is used by Ceph primarily to synchronize data between OSDs. The Ceph Public LAN is used primarily to serve user requests for data from Ceph to Proxmox VMs. The advantage of this practice is that Ceph's internal traffic stays isolated and does not interfere with the traffic of the running virtual machines. On a healthy Ceph cluster in the active+clean state, this is not an issue. However, when Ceph goes into self-healing mode due to an OSD or node failure, it rebalances itself by redistributing PGs among the remaining OSDs, which consumes very high bandwidth. Separating the two networks ensures that the cluster does not slow down...
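The public/sync split described above corresponds to two settings in Ceph's configuration file. The following is a minimal ceph.conf excerpt sketching this layout; the subnet addresses are illustrative assumptions, not values taken from any particular deployment:

```ini
# /etc/ceph/ceph.conf (excerpt) -- subnets are examples only
[global]
    # Ceph Public LAN: client I/O, i.e. Proxmox VMs reading/writing data
    public_network = 192.168.10.0/24
    # Ceph Sync LAN: OSD replication, heartbeats, and recovery/rebalance traffic
    cluster_network = 192.168.20.0/24
```

With `cluster_network` set, the bandwidth-heavy PG redistribution that occurs during self-healing travels over the sync LAN, leaving the public LAN free to serve VM traffic.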