VMware vRealize Operations Performance and Capacity Management

By: Iwan 'e1' Rahabok

Virtual Machine – it is not what you think!


A VM is not just a physical server that has been virtualized. Yes, there is a P2V process. However, once a server is virtualized, it takes on a new shape. That shape has many new and changed properties, and some old properties are no longer applicable or available. My apologies if the following is not the best analogy:

"We P2V the soul, not the body."

On the surface, a VM looks like a physical server. So let's take a closer look at a VM's properties. The following screenshot shows a VM's settings in vSphere 5.5. It looks familiar, as it has a CPU, Memory, Hard disk, Network adapter, and so on. However, look at it closely. Do you see any property that you don't usually see in a physical server?

VM property in vSphere 5.5

Let me highlight some of the properties that do not exist in a physical server. I'll focus on those properties that have an impact on management, as management is the topic of this book.

At the top of the dialog box, there are four tabs:

  • Virtual Hardware

  • VM Options

  • SDRS Rules

  • vApp Options

The Virtual Hardware tab is the only tab whose properties resemble those of a physical server. The other three tabs have no equivalent in a physical server. For example, SDRS Rules pertains to Storage DRS, which means the VM's storage can be moved automatically by vCenter. Its location in the data center is not static, and this includes the drive where the OS resides (the C:\ drive in Windows). This directly impacts your server management tool: it has to be aware of Storage DRS and can no longer assume that a VM always sits in the same datastore or LUN. Compare this with a physical server, whose OS typically resides on a local disk that is part of the server itself. You don't want your physical server's OS drive being moved around the data center, do you?
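The Storage DRS point can be illustrated with a toy sketch in plain Python (this is not a VMware API; all names here are hypothetical). A management tool that caches a VM's datastore in a static CMDB record goes stale the moment Storage DRS migrates the VMDK, so the tool has to query the live inventory (vCenter) instead:

```python
# Toy model only: 'inventory' stands in for vCenter's live view.
inventory = {"web01": "datastore-A"}   # live source of truth (vCenter)
cmdb_cache = dict(inventory)           # static snapshot taken by an old-style CMDB

def storage_drs_migrate(vm: str, target: str) -> None:
    """Model a Storage vMotion that Storage DRS triggers on its own."""
    inventory[vm] = target

storage_drs_migrate("web01", "datastore-B")

# The cached record is now wrong; only a live query gives the real location.
assert cmdb_cache["web01"] == "datastore-A"   # stale
assert inventory["web01"] == "datastore-B"    # actual
```

The point of the sketch: any tool holding its own copy of VM placement is wrong by design once the platform is free to move storage on its own.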

In the Virtual Hardware tab, notice the New device option at the bottom of the screen. Yes, you can add devices, some of them on the fly while Windows or Linux is running. All of a VM's devices are defined in software. This is a major difference from a physical server, where the hardware defines the server and you cannot change it. With virtualization, you can have an ESXi host with two sockets running a VM configured with five sockets. Your server management tool needs to be aware of this, and recognize that the new Configuration Management Database (CMDB) is now vCenter.

The next screenshot shows a bit more detail. I've expanded the CPU device. Again, what do you see that does not exist in a physical server?

VM CPU and Network property tab in vSphere 5.5

Let me highlight some of the options. Look at Reservation, Limit, and Shares. None of them exist in a physical server, as a physical server is standalone by default: it does not share the resources on its motherboard (CPU and RAM) with another server. With these three levers, you can deliver Quality of Service (QoS) in a virtual data center. Another point: QoS is built right into the platform. This has an impact on management, as the platform can do some of the management by itself. There is no need to get another console to do what the platform provides you out of the box.
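To make the three levers concrete, here is a minimal sketch of how Reservation, Limit, and Shares could interact under CPU contention. This is my own simplification, not the real ESXi scheduler: reservations are honored first, the remaining capacity is split in proportion to shares, and no VM ever receives more than its limit or its demand.

```python
def cpu_entitlement(vms, capacity_mhz):
    """Toy reservation/limit/shares allocator (a simplification, not the
    real ESXi scheduler). Each VM is a dict with 'reservation', 'limit',
    'shares', and 'demand'; all values in MHz except shares."""
    # Step 1: honor reservations (never beyond demand or limit).
    alloc = {n: min(v["reservation"], v["limit"], v["demand"])
             for n, v in vms.items()}
    remaining = capacity_mhz - sum(alloc.values())
    # Step 2: split what is left in proportion to shares, capping each VM
    # at min(demand, limit) and redistributing any spill-over.
    active = {n for n, v in vms.items()
              if alloc[n] < min(v["demand"], v["limit"])}
    while remaining > 1e-9 and active:
        total_shares = sum(vms[n]["shares"] for n in active)
        spill = 0.0
        for n in list(active):
            cap = min(vms[n]["demand"], vms[n]["limit"])
            give = remaining * vms[n]["shares"] / total_shares
            if alloc[n] + give >= cap:
                spill += alloc[n] + give - cap
                alloc[n] = cap
                active.discard(n)
            else:
                alloc[n] += give
        remaining = spill
    return alloc
```

For example, on a 3,000 MHz budget, a VM with a 1,000 MHz reservation and 2,000 shares ends up with roughly 2,333 MHz, while a VM with no reservation and 1,000 shares gets roughly 667 MHz: the reservation is guaranteed, and the leftover is split 2:1.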

Other properties in the previous screenshot, such as Hardware virtualization, Performance counters, HT Sharing, and CPU/MMU Virtualization, also do not exist in a physical server. It is beyond the scope of this book to explain every feature, and there are many blogs and technical papers freely available on the Internet that explain them. Some of my favorites are http://blogs.vmware.com/performance/ and http://www.vmware.com/vmtn/resources/.

The next screenshot shows the VM Options tab. Again, what properties do you see that do not exist in a physical server?

VM Options tab in vSphere 5.5

I'd like to highlight a few of the properties present in the VM Options tab. VMware Tools is a key and highly recommended component: it provides drivers and improves manageability, and it has no counterpart in a physical server. A physical server has drivers, but none of them come from VMware. A VM is different. Its motherboard (a virtual motherboard, naturally) is defined and supplied by VMware, hence the drivers are supplied by VMware, and VMware Tools is the mechanism that supplies them. VMware Tools comes in different versions, so it becomes something you need to be aware of and manage.

I've just covered a few VM properties from the VM settings dialog box. There are literally hundreds of properties in a VM that do not exist in the physical world, and even the properties that exist in both are implemented differently. For example, although vSphere supports N_Port ID Virtualization (NPIV), the Guest OS does not see the World Wide Name (WWN). This means data center management tools have to be aware of the specific implementation by vSphere, and these properties change with every vSphere release. Notice the sentence right at the bottom of the dialog box. It says Compatibility: ESXi 5.5 and later (VM version 10). This is your VM's motherboard. It has a dependency on the ESXi version, and yes, it becomes another new thing to manage too.
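That Compatibility line maps directly to a version table you end up managing. The mapping below reflects the public hardware-version baselines up to vSphere 5.5 (VM version 10 needs ESXi 5.5 or later, version 9 needs 5.1, and so on); the helper function is an illustrative sketch of the check, not a VMware API.

```python
# VM hardware ("compatibility") version -> minimum host release able to
# run it. Baselines up to vSphere 5.5, the release covered here.
MIN_HOST_VERSION = {
    4: (3, 0),    # ESX 3.x
    7: (4, 0),    # ESX/ESXi 4.x
    8: (5, 0),    # ESXi 5.0
    9: (5, 1),    # ESXi 5.1
    10: (5, 5),   # ESXi 5.5
}

def can_power_on(vm_hw_version: int, host_version: tuple) -> bool:
    """Illustrative check (not a VMware API): a host can power on a VM
    only if it meets the VM's minimum hardware-version requirement.
    Newer hosts still run older hardware versions."""
    return host_version >= MIN_HOST_VERSION[vm_hw_version]
```

So a VM at hardware version 10 powers on under ESXi 5.5 but not under ESXi 5.1, while a version-8 VM runs happily on ESXi 5.5.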

Every vSphere release typically adds new properties too, making a VM even more manageable than a physical machine and differentiating it further from a physical server.

Hopefully, I've driven home the point that a VM is very different from a physical server. I'll now list the differences from the management point of view. The following table shows the differences that impact how you manage your infrastructure. Let's begin with the core properties:

  • BIOS

    Physical server: A unique BIOS exists for every brand and model; even the same model (for example, the HP DL380 Generation 7) can have multiple BIOS versions. The BIOS needs updates and management, often requiring physical access to the data center, and that means downtime.

    Virtual machine: The BIOS is standardized. There is only one type, the VMware motherboard, and it is independent of the ESXi host's own motherboard. The VM BIOS needs far fewer updates and far less management, so the inventory management system no longer needs a BIOS management module.

  • Virtual HW

    Physical server: Not applicable.

    Virtual machine: This is a new layer below the BIOS. It needs an update on every vSphere release, so a data center management system has to be aware of it, which requires deep knowledge of vSphere. For example, to upgrade the virtual hardware, the VM has to be powered off.

  • Drivers

    Physical server: Many drivers are loaded and bundled with the OS, and all of them need to be managed. This is a big area in the physical world, as drivers vary from model to model and brand to brand, so the management tool has rich functionality: checking compatibility, rolling out drivers, rolling back when there is an issue, and so on.

    Virtual machine: Almost no drivers are loaded with the OS; they are replaced by VMware Tools. VMware Tools is the new driver, replacing all the others; even with NPIV, the VM does not need an FC HBA driver. VMware Tools itself needs to be managed, with vCenter being the most common management tool.

  • Hardware upgrade

    Physical server: This is done offline and is complex. OS reinstallation and updates are required, so it is a complex project in the physical world; sometimes a hardware upgrade is not even possible without upgrading the application.

    Virtual machine: This is done online and is simple, because virtualization decouples the application from the hardware. A VM can be upgraded from 5-year-old hardware to brand-new hardware, moving from a local SCSI disk to 10 Gb FCoE and from dual-core to 15-core CPUs. So yes, MS-DOS can run on 10 Gb FCoE, accessing SSD storage via the PCIe lane. You just need to vMotion the VM to the new hardware, so the operation is drastically simplified.

In the preceding table, we compared the core properties of a physical server with those of a VM. Let's now compare the surrounding properties, where the difference is just as striking:

  • Storage

    Physical server: Servers connected to a SAN can see the SAN and the FC fabric; they need HBA drivers, FC PCI cards, and multipathing software installed. They normally need an advanced filesystem or a volume manager to RAID the local disks.

    Virtual machine: A VM is not connected to the FC fabric or the SAN; it only sees a local disk. Even with NPIV, the VM does not send FC frames, and multipathing is provided by vSphere, transparently to the VM. There is no need to RAID the local disk: it is one virtual disk, not two, and availability is provided at the hardware layer.

  • Backup

    Physical server: A backup agent and a backup LAN are needed in the majority of cases.

    Virtual machine: Not needed in the majority of cases, as backup is done via vSphere VADP (vStorage APIs for Data Protection). An agent is only required for application-level backup.

  • Network

    Physical server: NIC teaming is common, typically needing two cables per server. The OS is VLAN-aware; VLANs are configured inside the OS, so moving to another VLAN requires reconfiguration.

    Virtual machine: NIC teaming is provided by ESXi; the VM is not aware of it and only sees one vNIC. VLANs are provided by vSphere, transparently to the VM, and a VM can be moved from one VLAN to another live.

  • Antivirus (AV)

    Physical server: The AV agent is installed in the Guest. It consumes OS resources and can be seen by an attacker, and AV signature updates cause high storage throughput.

    Virtual machine: An AV agent runs on the ESXi host as a VM (one per ESXi host). It does not consume Guest OS resources and cannot be seen by an attacker from inside the Guest OS. AV signature updates do not require high IOPS inside the Guest OS, and the total IOPS is also lower at the ESXi host level as updates are not done per VM.

Lastly, let's take a look at the impact on management and monitoring. As can be seen next, even the way we manage the servers changes once they are converted into VMs:

  • Monitoring

    Physical server: An agent is commonly deployed, and it is typical for a server to have multiple agents. In-Guest counters are accurate. A physical server averages around 5 percent CPU utilization due to the multicore chip, so there is little need to monitor it closely.

    Virtual machine: An agent is typically not deployed, although certain areas such as application and Guest OS monitoring are still best served by one. The key in-Guest counters are not accurate. A right-sized VM averages around 50 percent CPU utilization, 10 times higher than a physical server, so there is a need to monitor closely, especially when physical resources are oversubscribed. Capacity management becomes a discipline in itself.

  • Availability

    Physical server: HA is provided by clusterware such as MSCS and Veritas Cluster. Cloning a physical server is a complex task and requires the boot drive to be on the SAN or LAN, which is not typical. Snapshots are rarely taken, due to cost and complexity.

    Virtual machine: HA is a built-in core component of vSphere, and most clustered physical servers end up as just a single VM because vSphere HA is good enough. Cloning can be done easily, even live; the drawback is that clones become a new area of management. Snapshots can be taken easily; in fact, one is taken every time as part of the backup process, so snapshots also become a new area of management.

  • Asset

    Physical server: The physical server is an asset with book value, and it needs proper asset management as components vary among servers. A stock-take process is required.

    Virtual machine: A VM is not an asset as it has no accounting value. A VM is like a document: it is technically a folder with files in it. Stock-take is no longer required, as a VM cannot exist outside vSphere.
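The monitoring shift described above, from roughly 5 percent average utilization on physical servers to roughly 50 percent on right-sized VMs, is why capacity management becomes a discipline in itself. Here is a minimal sketch of the kind of host-level headroom check this implies (my own illustration, with made-up numbers, not a vRealize Operations formula):

```python
def host_cpu_headroom(vm_demands_mhz, host_capacity_mhz):
    """Illustrative host-level capacity check: with right-sized VMs
    running hot, the sum of VM demands can exceed host capacity,
    which is when oversubscription needs close monitoring."""
    demand = sum(vm_demands_mhz)
    ratio = demand / host_capacity_mhz
    return {
        "demand_mhz": demand,
        "utilization_pct": round(100 * ratio, 1),
        "oversubscribed": ratio > 1.0,
    }

# Eight VMs each demanding 2,000 MHz on a 20,000 MHz host: 80% utilized.
print(host_cpu_headroom([2000] * 8, 20000))
# Twelve such VMs: demand exceeds capacity, so contention counters
# (CPU ready, co-stop, ballooning, and so on) become essential to watch.
print(host_cpu_headroom([2000] * 12, 20000))
```

The check itself is trivial; the management change is that it has to run continuously, because on a shared platform one VM's demand now affects its neighbors.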