Mastering Proxmox - Second Edition

By: Wasim Ahmed

Overview of this book

Proxmox is an open source server virtualization solution with enterprise-class features to manage virtual machines and storage, and to virtualize both Linux and Windows application workloads. You begin with a refresher on the advanced installation features and the Proxmox GUI to familiarize yourself with the Proxmox VE hypervisor. You then move on to explore Proxmox under the hood, focusing on the storage systems used with Proxmox. Moving on, you will learn to manage KVM virtual machines and Linux containers and see how networking is handled in Proxmox. You will then learn how to protect a cluster or a VM with a firewall and explore the new HA features introduced in Proxmox VE 4, along with the brand new HA simulator. Next, you will dive deeper into the backup/restore strategy, followed by learning how to properly update and upgrade a Proxmox node. Later, you will learn how to monitor a Proxmox cluster and all of its components using Zabbix. By the end of the book, you will be an expert at making Proxmox work in production environments with minimum downtime.

Production Ceph cluster


As mentioned throughout this book, Ceph is a very resilient distributed storage system that pairs well with Proxmox to store virtual disk images. There are a few key factors that make a Ceph cluster a good choice for a production-level virtual environment.
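
For illustration, the following is a minimal sketch of how a Ceph pool might be attached to Proxmox as RBD storage for disk images through /etc/pve/storage.cfg. The storage ID, pool name, and monitor addresses here are placeholders and need to match your own cluster:

rbd: ceph-vm-store
        monhost 192.168.1.101 192.168.1.102 192.168.1.103
        pool rbd
        content images
        username admin
        krbd 0

Proxmox also typically expects the matching keyring at /etc/pve/priv/ceph/ceph-vm-store.keyring so that it can authenticate against the Ceph monitors.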

Forget about hardware RAID

When it comes to Ceph nodes and clusters, we can forget about hardware-based RAID. Instead, we have to think in terms of multi-node or clustered RAID, where redundancy is provided across the whole cluster rather than within a single node. That is because of the distributed nature of Ceph and the way data is dispersed across all the drives in the cluster, regardless of which node a drive sits in. With Ceph, we no longer need to worry about a device failure in a particular node. Ceph performs best when it is given direct access to each drive, without any RAID in the middle. If we place the drives of each node in a RAID array, we actually hurt Ceph immensely and take away everything that makes Ceph great. We can, however, still use a RAID interface card to implement a JBOD configuration or to be able to connect...
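
However the drives are physically attached, a minimal sketch of preparing them for Ceph on a Proxmox node looks like the following. It assumes Ceph has already been initialized on the node with pveceph, and that /dev/sdb and /dev/sdc are spare, unpartitioned drives presented as JBOD; the exact subcommand name can differ between Proxmox VE releases (newer versions use pveceph osd create):

# list the block devices available on this node
lsblk

# hand each raw drive to Ceph as its own OSD, with no hardware RAID underneath
pveceph createosd /dev/sdb
pveceph createosd /dev/sdc

Each drive then becomes an independent OSD, and Ceph's replication across the cluster takes over the role that a per-node RAID controller would otherwise play.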