
Windows Server 2012 Hyper-V (R1/R2) – the challenger or the new champion?


It has been a late realization, but after a decade of researching and understanding customer requirements, and after multiple releases, Microsoft has finally come out with a stable, feature-rich, and yet economical virtualization platform in the third release of Hyper-V. The software vendor's goal and vision, as per its data sheet, is to provide a consistent platform for infrastructure, apps, and data: the Cloud OS. Microsoft is almost there, but the journey to this point has been an interesting one.

Hyper-V 1.0, released with the Windows Server 2008 64-bit platform, was mocked by much of the IT community; it was more a prototype than a production-ready platform. Hyper-V has never shipped with 32-bit (x86) Windows platforms, though, incidentally, x86 builds did appear during the beta phase. The next version, Hyper-V 2.0, came out with Windows Server 2008 R2, which also marked the end of 32-bit server OS releases from Microsoft: Windows Server 2008 R2 was available only on the x64 (64-bit) platform. The second release of Hyper-V was quite stable, introduced dynamic memory, and retained the familiar Windows GUI. It was well received and adopted by the IT community. However, it lacked the scalability and prowess of VMware's ESX and ESXi servers, so its primary use case was cost: setting up an economical but not workload-intensive infrastructure.

Windows Server 2012 shipped with the third release of Hyper-V. It almost bridged the gap between ESXi and Hyper-V and tilted market share in Microsoft's favor, though VMware remains the market leader for now. Many new features and major enhancements were introduced to the virtualization stack, including virtual SAN support, which reduced the dependency of VMs on the parent partition. Windows Server 2012 R2 was not a major release, but it brought improvements and innovations to the third release. However, before we discuss the features and technical requirements of Hyper-V 2012 R2, let's first cover the architecture of Hyper-V.

The Hyper-V architecture – under the hood

It's imperative to know the underlying components that make up the architecture of Hyper-V and how they function in tandem. This not only helps when designing a solution but, more importantly, assists when troubleshooting one.

In one of the previous sections, we discussed what hypervisors are and noted that they run either on bare metal or hosted on an OS. However, before we proceed further with the terms related to Hyper-V, let's check out what OS protection rings, or access modes, are. Rings are protection boundaries enforced by the operating system via the CPU's access modes. In a standard OS architecture, there are four rings.

The innermost ring, Ring 0, sits just above the hardware; it hosts the OS kernel and has the most privileged CPU access. Ring 1 and Ring 2 host device drivers, or privileged code. Ring 3 is for user applications. The Windows OS uses just two rings: Ring 0 for kernel mode and Ring 3 for user mode processor access. Refer to the following diagram to understand this:

Figure 1-2: OS Protection Rings

Hyper-V is a Type-1 hypervisor. It runs directly on hardware, ensures allocation of compute and memory resources for virtual machines, and provides interfaces for administration and monitoring tools. It is installed as a Windows Server role on the host, and moves the host OS into the parent or root partition, which now holds the virtualization stack and becomes the management operating system for VM configuration and monitoring.
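As a quick, minimal sketch (assuming an elevated PowerShell session on the prospective host), enabling the role looks like this:

    # Install the Hyper-V role and its management tools, then reboot the host.
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # After the reboot, confirm that the role is active.
    Get-WindowsFeature -Name Hyper-V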

Since Hyper-V runs directly on hardware and handles CPU allocation tasks, it needs to run in Ring 0. However, this creates a potential conflict with the OS kernels of both the parent partition and the VMs, whose kernel modes are designed to run in Ring 0 as well. To resolve this, Intel and AMD facilitate hardware-assisted virtualization on their processors, which provides an additional privilege mode called Ring -1 (minus one), and Hyper-V, being a Type 1 hypervisor, slips into this ring. In other words, Hyper-V runs only on processors that support hardware-assisted virtualization. The following diagram depicts the architecture and the various components that are the building blocks of the Hyper-V framework:

Figure 1-3: Hyper-V Architecture

Let's define some of the components that build up the framework:

  • Virtualization stack: This is a collection of components that make up Hyper-V, namely the user interface, management services, virtual machine processes, providers, emulated devices, and so on.

  • Virtual Machine Management Service (VMMS): This service maintains the state of the virtual machines hosted in the child partitions and controls the tasks that can be performed on a virtual machine based on its current state (for example, taking snapshots). When a virtual machine is booted up, the VMMS creates a virtual machine worker process for it.

  • Virtual Machine Worker Process (VMWP): The VMMS creates a VMWP (vmwp.exe) for every running Hyper-V virtual machine and manages the interaction between the parent partition and the virtual machines in the child partitions. The VMWP manages all VM operations, such as creating and configuring, snapshotting and restoring, running, pausing and resuming, and live migrating the associated virtual machine.

  • WMI Provider: This allows the VMMS to interface with virtual machines and management agents; a short query sketch follows this list.

  • Virtual Infrastructure Driver (VID): This driver is responsible for providing partition management services, virtual processor management services, and memory management services for the virtual machines running in the child partitions.

  • Windows Hypervisor Interface Library (WinHv): This binary allows the operating system drivers in the parent and child partitions to communicate with the hypervisor via standard Windows calling conventions rather than issuing hypercalls directly.

  • VMBus: This is responsible for inter-partition communication and is installed with Integration Services.

  • Virtualization Service Providers (VSPs): These reside in the management OS and provide synthetic device access over VMBus to the Virtualization Service Clients in the child partitions.

  • Virtualization Service Clients (VSCs): These are integration components that reside in the child partitions and relay the child partitions' device I/O requests to the VSPs over VMBus.
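To see some of these pieces from the management OS, here is a minimal PowerShell sketch; it queries the WMI namespace exposed by the VMMS and lists the worker processes (output will vary with the VMs you run):

    # Query the Hyper-V WMI provider that the VMMS exposes to management agents.
    # Note: the result also includes an entry for the host itself.
    Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem |
        Select-Object ElementName, EnabledState

    # Each running VM is backed by its own worker process (vmwp.exe).
    Get-Process -Name vmwp -ErrorAction SilentlyContinue | Select-Object Id, StartTime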

One entity that is not explicitly depicted in the preceding diagram is Integration Services, also referred to as integration components. It is a set of utilities and services, some of which have been mentioned in the preceding list, installed on the VMs to make them hypervisor aware, or enlightened. This includes a hypervisor-aware kernel, Hyper-V enlightened I/O, virtualization service client (VSC) drivers, and so on. Integration Services, along with driver support for virtual devices, provide these five services for VM management (a short PowerShell sketch after the list shows how to inspect and enable them):

  • Operating system shutdown: The service allows the management agents to perform a graceful shutdown of the VM.

  • Time synchronization: The service allows a virtual machine to sync its system clock with the management operating system.

  • Data exchange: The service allows the management operating system to detect information about the virtual machine, such as its guest OS version, FQDN, and so on.

  • Heartbeat: The service allows Hyper-V to verify that the virtual machine is running and responsive.

  • Backup (volume snapshot): The service allows the management OS to perform a VSS-aware backup of the VM.
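Here is the promised sketch, assuming a VM named SRV01 on the local host (the VM name is illustrative):

    # List the integration services offered to the VM and their current state.
    Get-VMIntegrationService -VMName SRV01

    # Enable a specific service, for example, time synchronization.
    Enable-VMIntegrationService -VMName SRV01 -Name "Time Synchronization"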

Here's a first glimpse of a Hyper-V setting for this title. The following screenshot shows the Integration Services section of a virtual machine's Settings applet:

Figure 1-4: Integration Services

Hyper-V allows the hosting of multiple guest operating systems in child partitions. Based on whether the VMs have Integration Services installed, we can classify them as follows (a quick host-side check is sketched after the list):

  • Enlightened Windows guest machines: Windows virtual machines that are Hyper-V aware are referred to as enlightened. They either have the latest integration components built in by default (for example, a Windows Server 2012 R2 VM is considered enlightened when hosted on a Windows Server 2012 R2 host) or have Integration Services installed on them. The integration components install VSCs, as stated earlier, which act as device drivers for the virtual devices. The VSCs communicate and transfer VM device requests to the VSPs via VMBus.

  • Enlightened non-Windows guest machines: Beyond Windows, Microsoft supports multiple flavors of Linux (for example, RHEL, SUSE, and a few others), contrary to the rumor in some communities that Hyper-V does not support Linux. Linux guest machines are fully supported, and Microsoft provides Linux Integration Services (LIS) drivers so that the Hyper-V virtual devices integrated with these guests deliver optimal performance.

    At the time of writing this book, the latest release of LIS is version 3.5. The LIS ISO is available for download for older Linux distributions. Newer distributions of Linux are pre-enlightened, as they have LIS built into them by default.

  • Unenlightened guest machines: Windows, Linux, or other platforms that are not enlightened, or do not have Integration Services installed, are unaware of Hyper-V. Hyper-V still hosts them by emulating devices and CPU access. The drawback is that emulated devices do not provide high performance and cannot leverage the rich virtual machine management infrastructure that Integration Services enable.
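From the host, a simple way to gauge which guests are enlightened is to read the Integration Services version they report back; a minimal sketch (output depends on your guests):

    # VMs that report an IntegrationServicesVersion are Hyper-V aware;
    # unenlightened guests leave the field blank.
    Get-VM | Select-Object Name, State, IntegrationServicesVersion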

Windows Hyper-V 2012 R2 – technical requirements

Before we move on to the feature review of Hyper-V 2012 R2, let's consider the prerequisites for a Hyper-V host implementation; we will look at them in detail in the next chapter.

Ever since its first RTM release, Hyper-V has run on the x64 (64-bit) platform and requires an x64 processor. The CPU should fulfill the following criteria:

Note

RTM means Release to Manufacturing and refers to the milestone in the software development life cycle at which a build is finalized for release.

  • Hardware-assisted virtualization: These processors include a virtualization option that provides an additional privilege mode below Ring 0 (Ring -1). Intel calls this feature Intel VT-x, and AMD brands it AMD-V on their processors.

  • Hardware-enforced Data Execution Prevention (DEP): This feature is a security requirement from a Windows standpoint, as it prevents malicious code from being executed from system memory locations: with DEP, memory locations are tagged as non-executable. The setting is enabled in the BIOS. Intel calls the DEP setting the XD bit (Execute Disable bit), and AMD calls it the NX bit (No Execute bit). For Hyper-V, this setting is imperative, as it prevents the VMBus from being exploited as a channel for attacking the host OS.
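A convenient way to verify both criteria on a prospective host is the systeminfo utility, which reports a Hyper-V Requirements section on Windows 8/Server 2012 and later. A minimal sketch (run it before the Hyper-V role is installed; afterwards, the section only reports that a hypervisor has been detected):

    # Filter the relevant lines out of the systeminfo report.
    systeminfo.exe | Select-String "Virtualization Enabled", "Data Execution"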

Windows Hyper-V 2012 R2 – what it brings to the table

Windows Server 2012 was released with a box full of goodies for admins and architects, but there was room for more. In a previous section, we took a brief look at the features that were rolled out with Windows Server 2012. R2 introduced few but significant changes, as well as some noteworthy improvements to previously introduced features. In the last section of this chapter, there will be a long list of features and gotchas from Hyper-V 2012 R2 compared with VMware's ESXi, but here let's look at a few important features for consideration (a consolidated PowerShell sketch follows the list):

  • Generation 2 virtual machines: This has been one of the most talked-about inclusions in this release. In Hyper-V 2012 R2, there are two supported generations for virtual machines:

    • Generation 1: This still uses the old virtual hardware recipe available from previous Hyper-V releases, emulating the old Intel chipset.

    • Generation 2: This introduces a new set of virtual hardware, breaking the dependency on the older virtual hardware. It offers UEFI 2.0 firmware support and allows the VM to boot from a SCSI virtual disk or virtual DVD. It also adds PXE boot capability to the standard (synthetic) network adapter, doing away with the legacy NIC. For now, four operating systems are supported on Generation 2 VMs: the client OSes Windows 8 and 8.1, and the server OSes Windows Server 2012 and 2012 R2.

  • Hyper-V Replica: The disaster recovery solution inside Hyper-V has finally gained a change requested by many admins. Previously, administrators could create an offline copy of a VM on a second Hyper-V server; if the first server failed, the replica would be brought online as part of the disaster recovery process. With 2012 R2, it is possible to extend replication to a third replica server, which ensures further business continuity coverage. Earlier, a replica could only be configured via Hyper-V Manager, PowerShell, or WMI, but the feature has now been extended to Azure, and you need System Center Virtual Machine Manager (VMM) to push a replica to the cloud.

  • Automatic Virtual Machine Activation (AVMA): This feature saves a lot of activation overhead for admins when it comes to activating product keys on individual virtual machines. AVMA allows a VM to be installed on a licensed virtualization host and activates the VM when it starts. The supported guest operating systems for AVMA are Windows Server 2012 R2 Essentials, Windows Server 2012 R2 Standard, and Windows Server 2012 R2 Datacenter. Windows Server 2012 R2 Datacenter is required on the Hyper-V host for this function. This feature has a few use cases:

    • Virtual machines in remote locations can be activated

    • Virtual machines with or without an Internet connection can be activated

    • Virtual machine licenses can be tracked from the Hyper-V Server without requiring any additional access rights or privileges to the virtual machines

  • Shared virtual disks: With this exciting feature, admins may give iSCSI, pass-through disks, or even virtual SAN a miss. When enabled on a VHDX file, it allows the file to serve as shared storage for guest machine failover clustering.

  • Storage QoS: This is an interesting addition, wherein the admin can specify minimum and maximum IOPS per virtual disk so that storage throughput stays in check.

  • Linux support: Microsoft has put a lot of focus on building an OS-independent virtualization platform for hosting providers. New Linux releases are now Hyper-V aware, with Integration Services built in, and for older Linux platforms, Microsoft has released LIS 3.5. This new IS release enables many feature additions for Linux VMs, including dynamic memory, online VHDX resize, and online backup (Azure Online Backup, SCDPM, or any other backup utility that supports backing up Hyper-V virtual machines).
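To tie several of these features together, here is a minimal PowerShell sketch; the VM name, host name, paths, and sizes are illustrative, and it assumes the second host has already been configured to accept replicas:

    # Create a Generation 2 VM that boots from a SCSI-attached VHDX.
    New-VM -Name "GEN2-VM01" -Generation 2 -MemoryStartupBytes 2GB `
        -NewVHDPath "D:\VMs\GEN2-VM01.vhdx" -NewVHDSizeBytes 60GB

    # Attach a shared VHDX for guest clustering; the file must reside on a
    # Cluster Shared Volume or a Scale-Out File Server share.
    Add-VMHardDiskDrive -VMName "GEN2-VM01" `
        -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations

    # Storage QoS: keep the new data disk between 100 and 1000 normalized IOPS.
    Set-VMHardDiskDrive -VMName "GEN2-VM01" -ControllerType SCSI `
        -ControllerNumber 0 -ControllerLocation 1 -MinimumIOPS 100 -MaximumIOPS 1000

    # Enable Hyper-V Replica to a second host over Kerberos on port 80.
    Enable-VMReplication -VMName "GEN2-VM01" -ReplicaServerName "HV-HOST2" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "GEN2-VM01"

    # AVMA is triggered inside the guest by installing the published AVMA key
    # for its edition (placeholder below; look up the actual key):
    # slmgr /ipk <AVMA-key-for-your-edition>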