Microsoft Hyper-V Cluster Design

By: Eric Siron

Purposes for a Hyper-V Server cluster


There are several common reasons to group Hyper-V Server hosts into a cluster, but each situation is unique. Your particular purpose(s) will be the primary determinant of the goals you set for your cluster project. The following subsections discuss the common purposes for building a cluster. Those that you include in your document should be specific to your environment; in a planning document, generic topics belong in the overview portion.

As you consider the technologies and solutions available in Hyper-V Server and Failover Clustering, remember that you are not required to utilize all of them. It's easy to get caught up in the flash and glamor of exciting possibilities and design a system that solves problems your organization doesn't actually have, usually with equally unrealistic price tags and time demands. This practice, infamously known as over-architecting, is a non-trivial concern. Whatever you build must be maintained in perpetuity, so don't saddle yourself, your co-workers, and your organization with complexities that lack a clear and demonstrable need.

High availability

One of the most common reasons to build a Hyper-V Server cluster is to provide high availability to virtual machines. High availability is a term that is often misunderstood, so take the time to truly understand what it means and make sure you can explain it to anyone else who will be a stakeholder or otherwise involved in your cluster project. High availability is often confused with fault tolerance. A truly fault-tolerant solution can handle the failure of any single component without perceptible downtime for the consumers of the service it provides. Hyper-V Server has no built-in method to make the product completely fault tolerant. It is possible to leverage supporting technologies to provide fault tolerance at most levels, and it is possible to use Hyper-V Server as a component that provides fault tolerance for other scale-out technologies, but Hyper-V Server alone is not a fault-tolerant solution.

In contrast to fault tolerance, Hyper-V Server provides high availability. This term means a few things. First, to compare it directly to fault tolerance, it provides very rapid recovery after a major fault. Second, it grants the ability to move services in a planned fashion without perceptible downtime for those services. The primary use for such moves is planned maintenance of underlying hardware and supporting software. Of course, they can also be leveraged when a fault occurs but the system is able to continue functioning, such as when one internal drive in a RAID 1 array fails.
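
As a concrete illustration, a planned move of this kind is performed with a live migration. The following is a minimal PowerShell sketch using the Failover Clustering module; the virtual machine and node names are hypothetical:

# Move a clustered virtual machine to another node ahead of host maintenance;
# a live migration keeps the guest running throughout the move
Move-ClusterVirtualMachineRole -Name "SQL-VM01" -Node "HV-NODE2" -MigrationType Live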

Note

Hyper-V's high availability features do not grant virtual machines immunity against downtime. Approaches that may provide application-level immunity will be covered in Chapter 11, High Availability.

The most important distinction between fault tolerance and high availability in Hyper-V Server is that if a failure causes a Hyper-V Server host computer to fail without warning, such as a blue screen error, all of its virtual machines will crash. The Failover Clustering component will immediately begin bringing the failed virtual machines back online on surviving cluster nodes. Those virtual machines will act the same way a physical installation would if its power had been removed without warning.
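
If you want to verify where clustered virtual machines landed after a failover, you can query their cluster groups. A small sketch, assuming a cluster named HVCluster1 (the name is illustrative):

# List every clustered virtual machine, its state, and the node that currently owns it
Get-ClusterGroup -Cluster "HVCluster1" |
    Where-Object { $_.GroupType -eq "VirtualMachine" } |
    Format-Table Name, State, OwnerNode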

The following image is a visualization of Hyper-V Server in a cluster layered on fault-tolerant subsystems. Despite the fact that the cluster's constituent components have suffered a number of failures, the virtual machines are still running (although likely with reduced performance):

The subject of high availability will be explored more thoroughly later.

High Availability Printing

In Windows Server versions prior to 2012, you could create clusters specifically for the Windows print spooler service. While functional, this solution was never particularly stable. It was quite complicated and required a significant amount of hands-on maintenance: print drivers provided by the hardware manufacturer needed to be specifically designed to support clustering, certain uses required administrative scripting, and problems were difficult to isolate and solve. Beginning with Windows Server 2012, Microsoft defines High Availability Printing as a print spooler service running on a highly available virtual machine; you can no longer establish the print spooler itself as a clustered resource.
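
Making a print server virtual machine (or any other virtual machine) highly available is simply a matter of adding it to the cluster as a virtual machine role. A minimal sketch; the VM name is hypothetical and the virtual machine is assumed to already reside on cluster-accessible storage:

# Place an existing virtual machine under Failover Clustering's control
Add-ClusterVirtualMachineRole -VirtualMachine "PrintSrv01"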

Balancing resources

The second most common reason to employ a Hyper-V Server cluster is to distribute resources across multiple physical computers. When all of the nodes in a Hyper-V Server cluster are operational, you have the combined physical resources of every node at your disposal. Even though the involved technology specifically mentions the word failover, it is possible to design the system in such a fashion that not all hosted resources can be successfully failed over. Virtual machines can be assigned priorities that determine which of them supersede the others when there is contention for limited resources.
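
In PowerShell, this prioritization is exposed through the Priority property of each virtual machine's cluster group. A brief sketch with hypothetical VM names; the numeric values map to the High, Medium, Low, and No Auto Start settings:

# 3000 = High, 2000 = Medium, 1000 = Low, 0 = No Auto Start
(Get-ClusterGroup -Name "SQL-VM01").Priority = 3000
(Get-ClusterGroup -Name "Test-VM01").Priority = 1000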

When designing your cluster for resource balancing, there are two extremes. For complete high availability, you must have enough capacity to run all virtual machines on the smallest number of nodes that constitutes a majority; in a five-node cluster, for example, any three nodes must be able to carry the entire virtual machine load. For the highest degree of resource distribution, you must maximize the utilization of each node. In most cases, you'll gauge and select an acceptable middle ground between these two extremes. Your chosen philosophy should appear in the Goals section of your planning document. It's also wise to plan for additional virtual machines beyond those that will exist at initial deployment.
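
One way to sanity-check where you sit between these extremes is to compare the memory assigned to your virtual machines against the capacity of a majority of nodes. The following rough sketch ignores dynamic memory and host overhead, and the node names are hypothetical:

# Total startup memory across all virtual machines on all cluster nodes
$vmMemory = (Get-ClusterNode | ForEach-Object { Get-VM -ComputerName $_.Name } |
    Measure-Object -Property MemoryStartup -Sum).Sum

# Combined physical memory of the three nodes that form a majority in a five-node cluster
$majorityCapacity = (Get-VMHost -ComputerName "HV-NODE1","HV-NODE2","HV-NODE3" |
    Measure-Object -Property MemoryCapacity -Sum).Sum

"VMs require {0:N0} GB; a majority of nodes provides {1:N0} GB" -f ($vmMemory/1GB), ($majorityCapacity/1GB)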

Geographic dispersion

With the increased availability of high-speed public networking solutions across geographically dispersed regions, multi-site clusters are becoming more feasible. With a multi-site cluster, you can provide high availability even in the event of the loss of an entire physical site. These types of solutions are still relatively young and uncommon. Hyper-V Server requires a substantial amount of expensive supporting technology to make this possible, so ensure that you know all the requirements before attempting to create such a system. These requirements will be discussed in greater depth in Chapter 9, Special Cases.

Natural replacement for aging infrastructure

Traditionally, organizations purchase server hardware and software on an as-needed basis and keep it until it can no longer serve the purpose for which it was acquired. A Hyper-V Server cluster is a natural place for the replacements to be created. Instead of provisioning new hardware to replace old equipment on a one-to-one basis, new hardware is purchased only when the capacity of the existing cluster is no longer sufficient.

Not only does a Hyper-V Server cluster slip nicely into the current hardware replacement stream, it can also completely reshape the way hardware refreshes are handled. By decoupling hardware upgrades and replacements from software roles, an organization can upgrade software without waiting for a hardware refresh cycle. Freed from software dependencies, the hardware can be replaced on any schedule the organization desires; in some cases, and with careful planning, hardware can be upgraded without impacting hosted services at all. When a service impact is unavoidable, it is still likely to be substantially less intrusive than the usual physical-to-physical transition.
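
Node drain, introduced in Windows Server 2012, is the mechanism that makes such non-impacting hardware work possible. A short sketch; the node name is hypothetical:

# Drain the node: its clustered virtual machines are live migrated to other nodes
Suspend-ClusterNode -Name "HV-NODE1" -Drain

# ...perform the hardware maintenance or replacement...

# Return the node to service and immediately pull its virtual machines back
Resume-ClusterNode -Name "HV-NODE1" -Failback Immediate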

Test, development, and training systems

One of the defining features of a virtualized environment is isolation. Virtual machines are very effectively walled off from each other and from the rest of your network unless you intentionally take the steps to connect them. The ease of deploying new systems and destroying them once they've outlived their purpose is another key characteristic. Taken together, these traits facilitate a variety of uses that would be far more difficult in a physical environment. Of course, Hyper-V Server can provide this type of environment without a cluster. For some organizations, one or more of these roles are significant enough to be just as demanding as production. For organizations at the opposite end, who could never justify an entire cluster for such a purpose, the spare capacity present in most clusters will almost certainly provide enough room for a small test environment.

If you've never been in a position to consider these uses before: you can create environments to test trial software releases without placing them on production systems. You can examine a suspicious application in an ephemeral sandbox environment where it can do no lasting harm. You can duplicate a production system to safely test a software upgrade. You can even copy or emulate an end-user computer to train new users on a line-of-business application. Because all of these systems are only as connected to your live systems as you make them, a virtualization platform's ability to deliver a testing environment quickly and safely makes this a stronger selling point than it might appear at first.
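
As a sketch of how quickly such a sandbox can be built and torn down, the following uses a private virtual switch so the test machine cannot reach the host or the physical network. All names and paths are illustrative:

# A private switch connects VMs only to each other, not to the host or physical network
New-VMSwitch -Name "TestIsolated" -SwitchType Private

# Build a throwaway virtual machine attached only to the isolated switch
New-VM -Name "Sandbox01" -MemoryStartupBytes 2GB -SwitchName "TestIsolated" `
    -NewVHDPath "C:\ClusterStorage\Volume1\Sandbox01.vhdx" -NewVHDSizeBytes 40GB

# When the test is over, destroy the VM; its virtual disk must be deleted separately
Stop-VM -Name "Sandbox01" -TurnOff
Remove-VM -Name "Sandbox01" -Force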

Cloud hosting

A term that has grown more rapidly in popularity than in comprehensibility is cloud. This term has so many unique definitions that they could be collected into a cloud of their own. With the 2012 release of its server software products, Microsoft pushed forward with the term and appears to be attempting to satisfy as many of those definitions as it can. One of the core technologies positioned as a major component of its "cloud" solution is Hyper-V Server, especially when used in conjunction with Failover Clustering. Narrowing the scope of cloud to Hyper-V Server and Failover Clustering, what it means is that you can design an environment in which you can quickly create and destroy complete operating system environments as needed, without being concerned with the underlying support structure. To create a true cloud environment using Microsoft technologies, you must also use System Center Virtual Machine Manager (SCVMM) 2012 with Service Pack 1 for a Hyper-V Server 2012 deployment, or SCVMM 2012 R2 for a Hyper-V Server 2012 R2 deployment. With this tool, you'll be able to create virtual machines without even being involved in which cluster node they begin life on. This nebulous, on-demand provisioning of resources and conceptually loose coupling of software and hardware is what qualifies Hyper-V Server as a component of a cloud solution.

Another aspect that allows Hyper-V Server to be considered a cloud solution is its ability to mix hardware in the cluster. As a general rule, this is not a recommended approach: you should strive to use the same hardware and software levels on every host in your cluster to ensure compatibility and smooth transitions of virtual machines. However, an organically growing cluster that is intended to function as a cloud environment can mix equipment if necessary. It is not possible to Live Migrate virtual machines between physical hosts whose CPUs come from different manufacturers, and migrations between hosts whose CPUs are from the same vendor but are otherwise mismatched may also present challenges. If your goals and requirements stipulate that extra computing resources be made available, and some downtime is acceptable for a virtual machine being migrated, heterogeneous cluster configurations are both possible and useful.
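
For the same-vendor mismatch case, Hyper-V offers processor compatibility mode, which masks newer CPU features so that live migration can succeed. A minimal sketch with a hypothetical VM name; note that the virtual machine must be off while the setting is changed:

Stop-VM -Name "Web-VM01"
# Hide CPU features that are not common to all hosts, enabling migration
# between mismatched processors from the same manufacturer
Set-VMProcessor -VMName "Web-VM01" -CompatibilityForMigrationEnabled $true
Start-VM -Name "Web-VM01"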

Using Hyper-V Server to provide a cloud solution involves two major strategies: public clouds and private clouds. You can create or expand your own hosting service that sells computing resources, software service availability, and storage space to end users outside your organization. You can provide a generic service that allows end users to exploit the available system as they see fit, or you can attempt to provide a niche service with one or more specialized pre-built environments deployed from templates. The more common usage of a Hyper-V Server cloud, however, will be private consumption of resources. Either usage supplies you with the ability to track who is using the available resources, which is discussed in the following section.

Resource metering

A common need in hosted environments is the ability to meter resources. This should not be confused with measuring performance: the purpose of resource metering is to determine and track who is using which resources. This is most obviously important in pay-as-you-go hosting models, in which customers are billed only for what they actually use. However, resource metering has value even in private deployments. In a purely physical environment, it's not uncommon for individual departments to be responsible for paying for the hardware and software specific to their needs. One of the initial objections to virtualization was the loss of the ability to determine resource utilization; specifically, in a Hyper-V Server cluster where guest machines can travel between physical units at any time and share resources with other guests, it's no longer a simple matter of having a department pay for a physical unit. It also may not fit the organization's accounting practices to devote a single fund to server hardware regardless of usage. Resource metering is the answer to that problem: usage can be tracked in whatever way the organization needs. The results can be periodically recorded and individual departments or users matched to the quantity of resources they consumed. This enables a practice commonly known as chargeback, in which costs are precisely assigned to the consumer.

Hyper-V Server allows for metering of CPU usage, memory usage, disk space consumption, and network traffic. Third-party application vendors also provide extensions and enhancements to the basic metering package.
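
The built-in metering workflow is driven by three PowerShell cmdlets. A minimal sketch; the virtual machine name is hypothetical:

# Begin accumulating usage data for a virtual machine
Enable-VMResourceMetering -VMName "Accounting-Web01"

# At billing time, retrieve the accumulated CPU, memory, disk, and network figures
Measure-VM -VMName "Accounting-Web01" | Format-List

# After recording the results, zero the counters for the next period
Reset-VMResourceMetering -VMName "Accounting-Web01"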

VDI and RemoteFX

Virtual Desktop Infrastructure (VDI) is a generic term encompassing the various ways that desktop operating systems (such as Windows 8) are virtualized and made accessible to end users. VDI in Hyper-V Server is enhanced by the features of RemoteFX. This technology was introduced in the 2008 R2 version and provided superior video services to virtual desktops. RemoteFX was greatly expanded in Hyper-V Server 2012, especially when combined with Remote Desktop Services. A full discussion of these technologies is beyond the scope of this book, but if you intend to use them, they and their requirements must form a critical part of your planning and design. The hardware requirements and configuration steps are well documented in a TechNet wiki article viewable at:

http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
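
As a taste of what the configuration involves, once the host prerequisites from the article above are met, a RemoteFX 3D adapter can be attached to a virtual desktop with a single cmdlet. A sketch with a hypothetical VM name; the virtual machine must be off:

# Attach a RemoteFX 3D video adapter to a virtual desktop
# (requires a supported GPU and the Remote Desktop Virtualization Host role service)
Add-VMRemoteFx3dVideoAdapter -VMName "Win8-Desktop01"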

Be open to other purposes

The preceding sections outlined some of the most common reasons to build a Hyper-V Server cluster, but the list is by no means all-inclusive. Skim through the remainder of this book for additional ideas. Look through community forums for ways that others are leveraging Hyper-V Server clusters to address their issues.