Mastering Proxmox - Third Edition

By: Wasim Ahmed
Overview of this book

Proxmox is an open source server virtualization solution that provides enterprise-class features for managing virtual machines and storage, and for virtualizing both Linux and Windows application workloads. You'll begin with a refresher on the advanced installation features and the Proxmox GUI to familiarize yourself with the Proxmox VE hypervisor. Then, you'll explore Proxmox under the hood, focusing on storage systems, such as Ceph, used with Proxmox. Moving on, you'll learn to manage KVM virtual machines, deploy Linux containers quickly, and see how networking is handled in Proxmox. You'll also learn how to protect a cluster or a VM with a firewall and explore the new high availability features introduced in Proxmox VE 5.0. Next, you'll dive deeper into backup/restore strategies and see how to properly update and upgrade a Proxmox node. Later, you'll learn how to monitor a Proxmox cluster and all of its components using Zabbix. Finally, you'll discover how to recover Proxmox from disasters through some real-world examples. By the end of the book, you'll be an expert at making Proxmox work in production environments with minimal downtime.
Table of Contents (23 chapters)
Title Page
Credits
About the Author
About the Reviewers
www.PacktPub.com
Customer Feedback
Preface

Ceph cluster production


As mentioned throughout this book, Ceph is a very resilient distributed storage system that pairs well with Proxmox to store virtual disk images. There are a few key factors that make a Ceph cluster a good choice for a production-level virtual environment.

Forget about hardware RAID

When it comes to Ceph nodes and clusters, we can forget about hardware-based RAID. Instead, we have to think in terms of multi-node, cluster-wide redundancy. This is because of the distributed nature of Ceph and how data is dispersed across all the drives in the cluster, regardless of which node a drive is in. With Ceph, we no longer need to worry about a device failure in a particular node. Ceph performs best when it is given direct access to each drive, without any RAID layer in the middle. If we place the drives in a per-node RAID array, we actually hurt Ceph immensely and take away everything that makes Ceph great. We can, however, still use a RAID interface card to implement a JBOD configuration or to be able...
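As a rough sketch of what this looks like in practice, the commands below create Ceph OSDs on raw, un-RAIDed drives from a Proxmox node's shell. The device names (`/dev/sdb`, `/dev/sdc`) are placeholders for your own drives, and the exact subcommand can vary between Proxmox VE releases (older versions use `pveceph createosd`), so check your version's documentation first:

```shell
# List block devices to confirm each drive is exposed individually,
# i.e. the controller is in JBOD/pass-through mode, not presenting a RAID volume
lsblk -o NAME,SIZE,TYPE,MODEL

# Create one OSD per raw drive; Ceph handles redundancy across the cluster,
# so no per-node RAID is needed (or wanted)
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc

# Verify that the new OSDs are up and in
ceph osd tree
```

Each drive then becomes an independent failure domain that Ceph can rebalance around, which is exactly what a per-node RAID array would hide from it.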