Not all virtualizations are equal


There are plenty of misconceptions about virtualization, especially among IT folks who are not familiar with it. CIOs who have not felt the strategic impact of virtualization (be it a good or bad experience) tend to carry these misconceptions. Although a virtualized system looks similar to a physical system from the outside, it is completely re-architected under the hood.

So let's take a look at the first misconception: what exactly is virtualization?

Because it is an industry trend, virtualization is often generalized to include other technologies that are not actually virtualization. This is a typical strategy of IT vendors that have similar technology. A popular technology often branded as virtualization is hardware partitioning; once it is parked under the umbrella of virtualization, both are expected to be managed in the same way. Because the two are actually different, customers who try to manage both with a single piece of management software struggle to do so well.

Partitioning and virtualization are two different architectures in computer engineering, and the result is a major difference in functionality. The two are compared in the following diagram:

Virtualization versus partitioning

With partitioning, there is no hypervisor that virtualizes the underlying hardware. There is no software layer separating the VM and the physical motherboard. There is, in fact, no VM. This is why some technical manuals about partitioning technology do not even use the term "VM". They use the terms "domain", "partition", or "container" instead.

There are two variants of partitioning technology, hardware-level and OS-level partitioning, which are covered in the following bullet points:

  • In hardware-level partitioning, each partition runs directly on the hardware. It is not virtualized, which is why it is more scalable and takes less of a performance hit. Because it is not virtualized, it has to be aware of the underlying hardware, so it is not fully portable: you cannot move the partition from one hardware model to another. The hardware has to be built for the purpose of supporting that specific version of the partition. The partitioned OS still needs all the hardware drivers and will not work on other hardware if the compatibility matrix does not match. As a result, even the version of the OS matters, just as it does on a physical server.

  • In OS-level partitioning, there is a parent OS that runs directly on the server motherboard. This OS then creates an OS partition, where another "OS" can run. I use double quotes because it is not exactly a full OS that runs inside that partition: it shares the parent's kernel, and the OS has to be modified and qualified to be able to run as a Zone or Container (the short sketch after this list shows the shared kernel in practice). Because of this, application compatibility is affected. This is different in a VM, where there is no application compatibility issue because the hypervisor is transparent to the Guest OS.
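
To make the shared-kernel point concrete, here is a minimal Python sketch (not from the book) that you can run on a Linux host and again inside a container, such as a Docker container or LXC instance, on that same host. Both runs report the same kernel release, because an OS-level partition reuses the parent OS kernel instead of booting its own; a VM, by contrast, reports whatever kernel its own guest OS runs.

    # os_partition_check.py -- illustrative sketch, not from the book.
    import platform

    def describe_kernel():
        uname = platform.uname()
        print(f"System : {uname.system}")
        # The kernel release is shared with the parent OS when this runs
        # inside an OS-level partition (Zone, Docker container, LXC instance).
        print(f"Release: {uname.release}")
        print(f"Version: {uname.version}")

    if __name__ == "__main__":
        describe_kernel()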

Hardware partitioning

We have covered the differences from an engineering point of view. However, do they translate into different data center architectures and operations? We will focus on hardware partitioning here, as there are fundamental differences between hardware partitioning and software partitioning, and the use cases for the two are also different. Software partitioning is typically used for cloud-native applications.

With that, let's do a comparison between hardware partitioning and virtualization. Let's take availability as a start.

With virtualization, all VMs are protected by vSphere High Availability (vSphere HA): 100 percent protection, achieved without any awareness inside the VM. Nothing needs to be done at the VM layer; no shared or quorum disk and no heartbeat network are required to protect a VM with basic HA.

With hardware partitioning, protection has to be configured manually, one by one for each Logical Partition (LPAR) or Logical Domain (LDOM). The underlying platform does not provide it.
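
As an illustration of how little per-VM work is involved, the following pyVmomi sketch enables vSphere HA with a single cluster-level reconfiguration call. It is a minimal sketch, not taken from the book; the vCenter address, credentials, and cluster name are placeholders, and a real environment would add proper certificate handling, error checking, and task monitoring.

    # enable_ha.py -- illustrative pyVmomi sketch; host, credentials, and
    # cluster name below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_cluster(content, name):
        # Walk the inventory for a ClusterComputeResource with the given name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        clusters = [c for c in view.view if c.name == name]
        return clusters[0] if clusters else None

    def enable_ha(cluster):
        # vSphere HA is a cluster-level setting: one reconfigure call protects
        # every VM in the cluster, with no agents, quorum disks, or heartbeat
        # networks inside the VMs themselves.
        spec = vim.cluster.ConfigSpecEx(
            dasConfig=vim.cluster.DasConfigInfo(enabled=True))
        # The returned task is not waited on in this sketch.
        return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

    if __name__ == "__main__":
        ctx = ssl._create_unverified_context()  # lab use only
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="changeme", sslContext=ctx)
        try:
            cluster = find_cluster(si.RetrieveContent(), "Cluster01")
            if cluster:
                enable_ha(cluster)
        finally:
            Disconnect(si)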

With virtualization, you can even go beyond five nines, that is, 99.999 percent, and move to 100 percent with vSphere Fault Tolerance. This is not possible with the partitioning approach, as there is no hypervisor to replay CPU instructions. Also, because it is virtualized and transparent to the VM, you can turn the Fault Tolerance capability on and off on demand. Fault Tolerance is fully defined in software.
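
Purely as an illustration (not from the book), turning Fault Tolerance on and off from the API is a per-VM operation. The sketch below assumes vm is a vim.VirtualMachine object already looked up, for example with a container view as in the HA sketch above.

    # ft_toggle.py -- illustrative pyVmomi sketch; assumes `vm` is a
    # vim.VirtualMachine already retrieved, as in the HA example above.
    from pyVmomi import vim

    def enable_ft(vm: vim.VirtualMachine):
        # Creating the secondary VM switches Fault Tolerance on; the platform
        # maintains a live shadow instance on another host.
        return vm.CreateSecondaryVM_Task()

    def disable_ft(vm: vim.VirtualMachine):
        # Because FT is defined entirely in software, it can be switched off
        # again on demand without touching the guest OS.
        return vm.TurnOffFaultToleranceForVM_Task()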

Another area of difference between partitioning and virtualization is Disaster Recovery (DR). With partitioning technology, the DR site requires another instance to protect the production instance. It is a different instance, with its own OS image, hostname, and IP address. Yes, we can perform a Storage Area Network (SAN) boot, but that means another Logical Unit Number (LUN) to manage, zone, replicate, and so on. This approach to DR does not scale to thousands of servers; to make it scalable, it has to be simpler.

Compared to partitioning, virtualization takes a different approach. The entire VM fits inside a folder; it becomes like a document, and we migrate the entire folder as if it were one object. This is what vSphere Replication and Site Recovery Manager do. Replication is performed per VM, and there is no need to configure SAN boot. The entire DR exercise, which can cover thousands of virtual servers, is completely automated, with audit logs generated automatically. Many large enterprises have automated their DR with virtualization. There is probably no company that has automated DR for its entire LPAR, LDOM, or container estate.
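
To see the "VM as a folder" idea for yourself, the following pyVmomi sketch (illustrative, not from the book; it assumes vm is a vim.VirtualMachine already retrieved as in the earlier examples) lists every file that makes up a VM. They all sit under one datastore folder, which is exactly what per-VM replication copies as a single object.

    # vm_files.py -- illustrative pyVmomi sketch; assumes `vm` is a
    # vim.VirtualMachine already retrieved, as in the earlier examples.
    def list_vm_files(vm):
        # The .vmx configuration file anchors the VM's datastore folder.
        print(f"Configuration file: {vm.config.files.vmPathName}")
        # layoutEx.file enumerates every file backing the VM: configuration,
        # virtual disks, NVRAM, snapshots, and logs.
        for f in vm.layoutEx.file:
            print(f"{f.type:20} {f.size:>14}  {f.name}")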

To be clear, we're not implying that LUN-based or hardware-based replication is an inferior solution. We're merely making the point that virtualization enables you to do things differently.

We're also not saying that hardware partitioning is an inferior technology. Every technology has its advantages and disadvantages and addresses different use cases. Before I joined VMware, I was a Sun Microsystems sales engineer for 5 years, so I'm aware of the benefits of UNIX partitioning. This book is merely trying to dispel the misunderstanding that hardware partitioning equals virtualization.

OS partitioning

We've covered the differences between hardware partitioning and virtualization.

Let's switch gears to software partitioning. In 2016, the adoption of Linux containers will continue its rapid rise. You can actually use both containers and virtualization, and they complement each other in some use cases. There are two main approaches to deploying containers (a quick check for telling the two apart is sketched after this list):

  • Running them directly on bare metal

  • Running them inside a Virtual Machine
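
The following rough sketch (illustrative, not from the book) shows one way to tell the two apart on a Linux host: the DMI product name reads "VMware Virtual Platform" inside a VMware VM, while the container checks are simple heuristics based on Docker's markers and are not exhaustive.

    # where_am_i.py -- illustrative heuristic sketch; Linux-specific and
    # not exhaustive.
    import os
    from pathlib import Path

    def host_platform():
        # Reads "VMware Virtual Platform" inside a VMware VM; on a physical
        # server it shows the hardware vendor's model name.
        dmi = Path("/sys/class/dmi/id/product_name")
        return dmi.read_text().strip() if dmi.exists() else "unknown"

    def in_container():
        # /.dockerenv is created by Docker; the cgroup check is a weaker
        # heuristic that applies to cgroup v1 hosts.
        if os.path.exists("/.dockerenv"):
            return True
        try:
            return "docker" in Path("/proc/1/cgroup").read_text()
        except OSError:
            return False

    if __name__ == "__main__":
        print(f"Platform    : {host_platform()}")
        print(f"In container: {in_container()}")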

As both technologies evolve, the gap gets wider. As a result, managing a software partition is different from managing a VM, and securing a container is different from securing a VM. Be careful when opting for a management solution that claims to manage both; you will probably end up with the lowest common denominator. This is one reason why VMware is working on vSphere Integrated Containers and the Photon platform. Now that's a separate topic by itself!