
Instant Hyper-V Server Virtualization Starter

By: Vicente Eguibar

Overview of this book

Doing more with less and making the most of what we have is the aim of virtualization. Although resources such as the OS and printers are still shared, as under the time-sharing model, in a virtualized environment individual virtual servers are isolated from each other, giving the illusion of multiple fully functional systems. Hyper-V Server 2008 R2 provides the software infrastructure and basic management tools that you can use to create and manage a virtualized server computing environment.

"Instant Hyper-V Server Virtualization Starter" will teach you the basics of virtualization and get you started with building your first virtual machine. The book also contains tips and tricks for using Microsoft Hyper-V Server 2008 R2. It is a crash course on getting your virtualization infrastructure working, creating a virtual machine, and making it more robust by anticipating failures. You will also learn how to create a virtual network so that your virtual machines can communicate among themselves and with the rest of the world. You will even learn how to calculate the costs involved in your Microsoft Hyper-V virtualization solution.
Table of Contents (7 chapters)

Top features you need to know about


Microsoft Hyper-V Server 2008 R2 provides many features, which we will try to cover in the following sections. Unfortunately, the virtualization world has many aspects, so we will focus on the most important ones in order to get the most out of the product and start using it right away.

Capacity planning

Wikipedia defines capacity planning (http://en.wikipedia.org/wiki/Capacity_planning) as follows:

The process of determining the production capacity needed by an organization to meet changing demands for its products.

In our IT virtualization world, that translates to the total number of virtual machines we can run on our current hardware.

As we defined in the So, what is Microsoft Hyper-V Server 2008 R2? section, a virtual machine is, among other things, a kind of partition or division of physical resources between the host server and the running virtual machines. Everything in this world has a limit, including virtual machines; our task is to identify that limit.

The first thing to bear in mind is that each virtual machine has to be dimensioned according to the role it is hosting, or the software running on it. In other words, if the VM is going to be a SQL server, we have to dimension it according to the database requirements in terms of CPU, RAM, disk, network, and so on. We should not treat the file server VM and a Microsoft Exchange Server 2010 VM in the same way. The four main subsystems to take care of are as follows:

CPU

Nowadays, CPUs provide plenty of power that you can share between your VMs, but you still have to consider the number of running VMs and how CPU intensive they are. If you plan to have a few VMs whose main purpose is lab testing, a two-core CPU will do the trick, but if you have several CPU-intensive VMs, a four-core or six-core CPU should be used. For a serious production environment, a two-socket server (a server with two CPUs installed) is more appropriate. If you plan to virtualize your existing physical servers, it is recommended to gather their current CPU usage and take those values as the entry point for dimensioning your Hyper-V server.

The virtual machine will benefit from the number of virtual processors configured, up to a limit of four per VM. You can find more details about this at Microsoft TechNet (http://technet.microsoft.com/en-us/library/cc794868(WS.10).aspx).
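As a rough planning sketch (this is not part of Hyper-V; the four-vCPUs-per-VM limit comes from the text, while the oversubscription ratio is an assumed planning figure, not a Microsoft number), a vCPU plan can be sanity-checked like this:

```python
# Illustrative capacity check: validate the per-VM vCPU limit and compute
# the total vCPU-to-physical-core ratio for a planned set of VMs.
def check_vcpu_plan(physical_cores, vms, max_vcpus_per_vm=4, max_ratio=8):
    """vms is a list of vCPU counts, one entry per planned VM.
    max_ratio is an assumed planning ratio, not a Hyper-V hard limit."""
    for i, vcpus in enumerate(vms):
        if vcpus > max_vcpus_per_vm:
            raise ValueError(f"VM {i} requests {vcpus} vCPUs; limit is {max_vcpus_per_vm}")
    total = sum(vms)
    ratio = total / physical_cores
    return {"total_vcpus": total, "ratio": ratio, "ok": ratio <= max_ratio}

# A two-socket quad-core host (8 cores) with five modestly sized VMs.
print(check_vcpu_plan(physical_cores=8, vms=[4, 2, 2, 1, 1]))
```

Feeding in the monitored CPU usage mentioned above, instead of guessed vCPU counts, gives a far more realistic entry point.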

RAM

The physically-installed RAM on the host server will mostly determine the number of VMs that can run simultaneously on the server. If we have 8 GB of physical RAM, it will be nearly impossible to run two VMs with 4 GB of RAM each. So we have to plan the RAM distribution carefully. As a rule of thumb, we have to "reserve" at least 1 GB of RAM for the host (better if you assign 2 GB) and an additional 32 MB for each virtual machine (again, better if we reserve 64 MB instead). The rest of the memory can then be assigned to the virtual machines.
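The rule of thumb above can be sketched in a few lines; the reserve and overhead figures are the ones suggested in the text:

```python
# RAM budget sketch: subtract the host reserve and a per-VM overhead,
# then see how many equally sized VMs actually fit.
def assignable_ram_mb(physical_mb, vm_count, host_reserve_mb=2048, per_vm_overhead_mb=64):
    return physical_mb - host_reserve_mb - vm_count * per_vm_overhead_mb

def max_vms(physical_mb, vm_ram_mb, host_reserve_mb=2048, per_vm_overhead_mb=64):
    count = 0
    while assignable_ram_mb(physical_mb, count + 1,
                            host_reserve_mb, per_vm_overhead_mb) >= (count + 1) * vm_ram_mb:
        count += 1
    return count

print(max_vms(8192, 2048))  # 8 GB host, 2 GB VMs: 2 fit
print(max_vms(8192, 4096))  # 8 GB host, 4 GB VMs: only 1 fits, as noted above
```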

Just as with CPU monitoring, we will use the monitored RAM data to plan our new virtual machines. You would be surprised how often RAM is underutilized in our machines. By digging into the monitoring data, we can define the initial RAM needed and the growth progression.

Windows Server 2008 R2 provides new functionality that helps improve RAM usage, called Dynamic Memory. Its main task is to grant memory to, or reclaim it from, VMs depending on how memory is being used at any given moment. In order to use Dynamic Memory, the guest OS has to support the technology (for example, Windows Server 2003 R2 SP2 or higher, Windows Vista SP1 or higher, Windows 7 SP1 or higher, or Windows Server 2008 R2 SP1 or higher) and have the Hyper-V Integration Services installed.

Disk

This is the subsystem that will introduce the most bottlenecks into our virtualization strategy. All VMs demand IOps (input/output operations per second), and each disk can deliver only a limited number of them, so the IOps have to be shared between the VMs, the host OS, and any other service configured on the host. There are techniques (such as disk arrays/RAID, or network attached storage/NAS, to mention some) that help overcome these bottlenecks. The second issue is storage space. It is very easy to store several GB on disk, and if we multiply that by the number of apps times the number of VMs, we can easily end up with terabytes of information. The bottom line is that we have to plan for good shared performance with adequate storage space.

Again, monitoring data is your ally when planning storage. By knowing the maximum IOps your infrastructure can deliver, you can plan for growth and assign the current services to the infrastructure without pain.

Network

With a 100-megabit NIC in a desktop, it is very unlikely that we will find a bottleneck there; but if we run several VMs over it, we can run out of bandwidth very easily. When designing your Hyper-V solution, decide how many LAN segments you will need (we will talk about this a bit more in the next section), the current network speed of each segment, how many VMs will use it, and the bandwidth usage of each VM. Ideally, you will have monitoring data to help you design your solution, but it is always a good idea to have a gigabit network segment (or several, depending on your needs) and to equip your server with network cards supporting the latest technology (such as TCP Chimney Offload, Jumbo Frames, or a larger network MTU, to mention some) to help optimize network usage. The simplest way to check your network usage is the Task Manager graph.
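The per-segment check described above can be sketched as follows; the 70 percent usable-bandwidth figure is an assumption to allow for protocol overhead, not a measured constant:

```python
# Bandwidth budget sketch: do the VMs sharing one vSwitch fit the NIC?
def segment_ok(nic_mbps, vm_peak_mbps, usable_fraction=0.7):
    """vm_peak_mbps: monitored peak bandwidth per VM on this segment."""
    return sum(vm_peak_mbps) <= nic_mbps * usable_fraction

print(segment_ok(100, [30, 25, 20]))   # three busy VMs overrun a 100 Mb NIC
print(segment_ok(1000, [30, 25, 20]))  # the same VMs fit comfortably on gigabit
```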

Virtual network

We've been talking a lot about vSwitches, but until now we have created just one external switch without knowing all its benefits and when to use it. We already know that there are three types of network, so let's go into them. A vSwitch can only be bound to one physical network at a time, but a VM can have more than one virtual NIC, each associated with a different vSwitch. For example, we can have a VM with two virtual NICs: one external for Internet access and the other internal for monitoring. There are many more networking options you can configure on virtual networks, such as VLAN tags, the MAC address range to use, or even a specific MAC address for a VM, but for now we will focus on the types of network available and the benefits each one grants us.

The EXTERNAL network

It is called EXTERNAL because it gives the same functionality as if it were physically connected to the LAN segment. In other words, this type of vSwitch is like the one you are physically connected to, and it provides the same benefits. So if you are planning to have your VM participate in your LAN segment as if it were another physical server, then this is your option.

By using this type of network, we have the option to enable or disable the Allow management operating system to share this network adapter checkbox. In case we only have a single NIC, then we must enable this option, otherwise we won't be able to manage the host OS; but if we have more than one NIC (as is the case in many servers), we can assign one NIC for management purposes and the others for different LAN access.

As you can see in the preceding diagram, the host server has three physical NICs connected to two different LAN segments, with one of the NICs reserved for management purposes. The remaining two NICs are each bound to a vSwitch (the red and the green vSwitch respectively). The red VM1 and the green VM3 are configured with a single vNIC connected to the corresponding vSwitch, but VM2 has two vNICs, so it has access to both vSwitches. In topologies with a flat network and a single LAN segment, the idea of having several vSwitches may not make too much sense, but when we deal with much more complex topologies, we have to arrange our vSwitches so that the right LAN segment access is available within the virtualization server.

The INTERNAL ONLY network

The internal network is for the VMs to communicate with each other and with the host. In other words, communication is possible between all VMs that use this switch, and with the host. Because of the nature of these networks, you don't need to assign a physical NIC to them. These networks provide a certain level of isolation from the rest of the infrastructure, which makes them great for labs.

Tip

Access external networks using the INTERNAL vSwitch

Create the INTERNAL vSwitch, which facilitates communication between the VMs and the host. Because you can't assign a physical NIC to it, enable Internet Connection Sharing on the physical NIC that you want to use as a gateway (open the properties of the physical NIC and, on the Sharing tab, select the Allow other network users to connect through this computer's internet connection checkbox, then select the previously-created INTERNAL vSwitch there).

The physical NIC will act as a Network Address Translation (NAT) gateway and as a DHCP server for that vSwitch (on my laptop, I always get a 192.168.137.x IP address, with 192.168.137.1 as the default gateway and DNS). You can modify the physical NIC routing table in order to access those VMs from the LAN.
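Using Python's standard ipaddress module, the ICS addressing described above can be checked quickly (the 192.168.137.0/24 range is the one observed on the author's laptop; ICS behaviour may differ on other configurations):

```python
# Verify that a VM's address and the shared NIC's gateway both fall inside
# the subnet Internet Connection Sharing hands out on the INTERNAL vSwitch.
import ipaddress

ics_net = ipaddress.ip_network("192.168.137.0/24")
gateway = ipaddress.ip_address("192.168.137.1")   # NAT gateway and DNS
vm_ip = ipaddress.ip_address("192.168.137.42")    # example DHCP lease

print(vm_ip in ics_net)    # the VM sits on the INTERNAL/ICS subnet
print(gateway in ics_net)  # the shared NIC is reachable from the VMs
```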

In the following diagram, we can see the red vSwitch as an EXTERNAL vSwitch, and the green vSwitch as INTERNAL, but having the yellow physical NIC sharing the connection. Also the VMs are accessing their correspondingly-colored networks.

The PRIVATE VIRTUAL MACHINE network

Finally, the private VM network. This type only provides communication between the VMs: only VMs using this vSwitch can communicate, and not even with the host. This is the most isolated type of network, very good when all the servers and services live on the same vSwitch but must be completely isolated outside this boundary.

For more information, you can visit John Howard's blog at http://blogs.technet.com/b/jhoward/archive/2008/06/17/hyper-v-what-are-the-uses-for-different-types-of-virtual-networks.aspx.

Virtual disk and snapshot management

As with any other machine, a VM needs storage space to function. The virtual hard drive, or VHD, is the virtual piece that grants that storage space. Although the VHD is not the only disk type, it is the most common and the only one that supports snapshots (which we will discuss right after this topic).

The disks

There are four types of disks that we can use for our VMs, and we have to carefully plan in advance depending on what we want to achieve with each one of them.

  • Fixed. This file-based disk is created at the maximum capacity configured when it is created. In other words, when we create the disk with a maximum size of 100 GB, Hyper-V creates a VHD file of 100 GB on the physical disk; depending on the size and the disk performance, it can take a long time until the disk is created. These disks are quite portable, and moving one only depends on copying the file from one host server to another.

  • Dynamic. In contrast to the fixed disk, this disk is created with a minimum space used, and will grow in size whenever new data is copied into it. These disks are almost instantaneously created. The disk will continue growing until it reaches the maximum configured size. This concept is also known as thin provisioning. These disks are the most flexible, because to copy or move them to another server, we only need to transfer the real data stored inside, which makes the overall process faster.

  • Differential. We consider this a child disk because it depends on a parent or master disk. These disks are intended to store changes relative to the parent. In other words, you start with the master disk, and any write/delete operation that would be done on the master is stored on the differential disk instead. These disks are automatically created when we take a snapshot of the VM (snapshot technology will be discussed in a while; be patient).

The fixed and the dynamic disks are the only disk types that support snapshots (which produce a differential disk as a consequence). By default, these disks are created in C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks, but you can define where to store the file when creating the disk; be sure to store these files on the correct physical disk to avoid future storage problems. Choosing between fixed and dynamic disks is a matter of tradeoff: the fixed disk provides slightly better performance, but it uses all the configured space up front.

On the other hand, the slightly slower dynamic disk will not consume all the configured space, but over time it will continue growing, and if we are not carefully monitoring our disks, it will eat up all the available space in the physical disk. So one option will take over all the disk space even if the virtual disk is not using it, and the other one will use it progressively with the risk of running out of space if we are not carefully monitoring it and taking action in advance.
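The monitoring advice above can be sketched as a simple alert check; the thresholds here are assumptions for illustration, not Hyper-V defaults:

```python
# Warn before a growing dynamic VHD can exhaust the physical volume:
# compare the disk's remaining growth potential against free space.
def dynamic_vhd_alert(physical_free_gb, vhd_current_gb, vhd_max_gb, warn_gb=20):
    potential_growth = vhd_max_gb - vhd_current_gb
    remaining_if_full = physical_free_gb - potential_growth
    if remaining_if_full < 0:
        return "CRITICAL: disk can grow past available space"
    if remaining_if_full < warn_gb:
        return "WARNING: little headroom left"
    return "OK"

# A 100 GB dynamic disk at 30 GB on a volume with only 60 GB free can
# still grow by 70 GB, so it is already over-committed.
print(dynamic_vhd_alert(physical_free_gb=60, vhd_current_gb=30, vhd_max_gb=100))
```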

VHD files can only be stored on NTFS volumes without compression enabled, and their maximum size is 2 terabytes. When you create a new disk, the wizard will ask what type of disk you want to create.

And the last consideration, but not the least important, is security. Because these disk types are file-based, they can be copied to another disk or another medium, such as an external USB disk. So protecting these files accordingly is a must in order to avoid unauthorized copying of data.

Pass-through. This is the name used when the virtual machine directly uses a physical disk. These disks provide the best performance (very significant when comparing average reads and writes, but a very small difference when comparing IOps), but as a tradeoff they do not support snapshots and are quite complicated to migrate or move. As a matter of fact, it takes the same effort to move or copy this type of disk as a physical disk, because it is a physical disk; the only difference is that a VM is accessing it instead of the host.

You may be wondering why we would use a pass-through disk when we have a fully-featured and very portable VHD; well, in some cases we do not need that functionality, and we can achieve portability using different technologies. For example, it is recommended that virtualized Active Directory Domain Controllers be installed on pass-through disks, because AD object replication can be severely damaged by a snapshot (as Microsoft recommends in this TechNet article: http://go.microsoft.com/fwlink/?LinkId=159631). But, as mentioned earlier, portability is almost eliminated with this kind of disk; well, not entirely. For example, if we use the pass-through disk as an iSCSI disk (we will discuss this further in the upcoming Reliability and fault tolerance section, but in short it is a disk stored on a Network Attached Storage (NAS) device accessed through a block-access standard, internet SCSI), it can be seamlessly disconnected from one server and connected to another without too much effort and with very little downtime, so we can achieve a similar degree of portability.

Snapshots

Snapshots can be considered a mechanism to travel back in time (IT time, to be more precise). When a snapshot is taken, Hyper-V stores all the VM information, and any change done after this point can be reverted. For example, we install Windows 7 with all its service packs and hotfixes, even install an antivirus and other software, and if we wish to "mark" this point in time, we take a snapshot. Later on we install all kinds of software and tools, but the first part of our Windows 7 installation still remains in the "parent" disk, so if we decide to go back to a previous state, we simply revert to the desired snapshot. Very simple, isn't it? But how is this done? Well, it is done by using VHD files: the moment we take a snapshot, Hyper-V creates a differential disk (a file-based disk with an AVHD extension) where any change is written.

So if we know what we are doing, taking a snapshot is only a matter of a click. But we do need to know what is going on when dealing with snapshots. As we already mentioned, when a snapshot is taken, the VHD file becomes a read-only parent and a differential AVHD child file is created, so guess which file starts using the storage space? Correct, the differential file. Now we can start considering the implications of a snapshot:

  • Performance: After the first snapshot is taken, two linked files acting as a single virtual disk will exist, and each time another snapshot is taken, a new linked child of the child file appears, leading to reduced disk performance.

  • Storage space: Taking snapshots may rapidly consume all your hard drive space without any previous notice (unless you are a good admin and have set up monitoring alerts).

  • Recovering space from snapshots: When you delete a snapshot (always from the Hyper-V management console, NEVER by deleting the file from Explorer or cmd), the storage space remains in use until you shut down, turn off, or put the virtual machine into a saved state. At that point, Hyper-V merges the file into the VHD and releases the space used by the snapshot (the differential file). This can be quite time consuming, depending on the amount of data to merge.

  • Never take a snapshot when:

    • The service is time-sensitive, such as Kerberos authentication, Active Directory, or anything similar.

    • A potential performance problem exists (mainly in the storage subsystem).

    • You are running short on storage space.

    • You intend it as an alternative to a backup solution.

Now you may be asking: why is this snapshot technology so marvelous if it has so many implications?

Well, it is a wonderful rollback tool for production environments. Imagine you have your customer database server and you need to apply a new software release to it, but you haven't had time to try it first on your Quality Assurance infrastructure (a joke, but yes, we should not run without one), so you create a snapshot and proceed with installing the update. If everything goes correctly, you just delete the snapshot and continue working as usual; but if the update breaks the critical customer database server, you should not panic: just revert to your snapshot, and this will leave your server exactly as it was before the update.

You can use a snapshot as a dirty trick to change the storage used by some VMs (be careful when doing this, try it several times on your lab before messing with your servers).

It may also be used in your lab (I have it set up like this on my laptop) to create a "master" disk with Windows 2008 ready to use, so that any new server you create uses a differential disk, thereby saving space on your disk.

In the same way that we would not use a sports car as a truck, we should use snapshots only in the right scenario. The snapshots taken of a given VM are organized as a "tree" (similar to the tree in the file explorer) based on the time they were taken, so the latest one will be at the end of the tree, and so on. Take special note that when deleting a higher node of the snapshot tree, all the children belonging to it will be deleted as well.
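A toy model (not the Hyper-V API) of the snapshot tree and the subtree-deletion behaviour described above:

```python
# Each snapshot node may have children; deleting a higher node removes its
# whole child hierarchy, mirroring the Delete Snapshot Subtree option.
class Snapshot:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def subtree_names(self):
        names = [self.name]
        for c in self.children:
            names += c.subtree_names()
        return names

root = Snapshot("base install")
sp1 = root.add(Snapshot("after service pack"))
sp1.add(Snapshot("after antivirus"))

# "Delete Snapshot Subtree" on sp1 removes sp1 and everything under it.
root.children.remove(sp1)
print(root.subtree_names())  # ['base install']
```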

And finally, the details. To take a snapshot, just select the desired VM in the Hyper-V console, right-click on it, and select Snapshot. This will generate a snapshot named <Virtual Machine Name> - <Current Date and Time>. By selecting Apply, the machine will "travel in time" and end up exactly as it was when the selected snapshot was taken; all data written after the snapshot date will be lost. To change the name of the snapshot, or to add a comment, select the Rename menu. As the name says, Delete Snapshot deletes the snapshot; in other words, there is no longer a chance to return the VM to that previous state.

Once we delete a snapshot, the file merge remains pending until the VM is stopped, shut down, or put into a saved state. And finally, the Delete Snapshot Subtree option (these options are shown in the following screenshot) will delete the selected snapshot and the child hierarchy depending on it:

Making virtual machines portable

We are going to face situations where we will need to move our virtual machines from one server to another. In order to do this, we need to make our VM ready for the process. The first question we all have is, what will happen if we only copy the VHD file to the destination server? Well, this will retain all of the data stored in the disk, but we will be missing the VM configuration (VM name, CPU, RAM, NIC, and so on). Nothing to worry about, but additional work to perform because we will need to recreate the virtual machine.

One way or another, we have to copy the VHD file over to the destination server (remember that we can only do this with VHD files, whether fixed, dynamic, or differential; it is not possible with pass-through disks), but we want to avoid the task of recreating every single VM.

The best approach is to select Export from the virtual machine's contextual menu, and in the new window just provide the destination path to where we want to save it. But be aware that in order to export any VM, it has to be powered off or should be in a saved state. If the VM is running, there is no chance we can export it.

Exporting a VM is very straightforward, but what exactly happens in the background? We see almost nothing of the process, except for the Status column in the Hyper-V Management Console, which shows the export progress until it reaches 100 percent and then disappears. This is very good, because it means no errors were found and your export is complete. At this point we have an untouched VM ready to rock and roll, and a clone of it stored on the path you provided (don't forget to start your VM again). In the destination path we can find three folders (snapshots, virtual hard disks, and virtual machines) and an XML file holding the configuration data.

Now let's check the taxonomy of the export: the snapshots folder holds any snapshots the VM had, if any existed; the virtual hard disks folder holds the VHD files used by the VM, exactly as they were in the original; and the virtual machines folder holds a file with a quite long hexadecimal name and an EXP extension. These weird files and folders are your exported VM, including, as a property inside the file, the friendly name you gave the VM during its creation. Hyper-V identifies each VM by such hexadecimal names (similar to a Globally Unique Identifier, or GUID). Note that this export is an exact clone of the original, so importing and running this VM on the same LAN segment will create conflicts.
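A hypothetical sketch of that export layout: the folder names follow the text, while the GUID-style filename is invented purely for illustration, so treat this as a mock-up rather than a real export:

```python
# Build a mock export folder and locate the .exp configuration file the
# same way you would by hand on a real export.
import os
import tempfile

with tempfile.TemporaryDirectory() as export_root:
    for folder in ("Snapshots", "Virtual Hard Disks", "Virtual Machines"):
        os.makedirs(os.path.join(export_root, folder))
    exp_name = "2B4B1B4E-0F2A-4E6B-9E6D-1C2D3E4F5A6B.exp"  # invented GUID-style name
    open(os.path.join(export_root, "Virtual Machines", exp_name), "w").close()

    exp_files = [f for f in os.listdir(os.path.join(export_root, "Virtual Machines"))
                 if f.lower().endswith(".exp")]
    print(exp_files)
```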

Importing a VM is also a very simple process. We select the Import menu from the right-hand side pane of the Hyper-V management console in the destination server. This will bring up the following window:

The import settings can be used to move or copy the selected virtual machine. If you are moving the VM, its unique identifier is reused (very good when the original VM will be deleted); alternatively, you can create a new ID by copying the VM. Unless you know exactly what you are doing, it is better and safer to select the copy option. The Duplicate all files… checkbox makes a copy of everything stored in the provided location, leaving the original files intact so we can re-import them again and again.

If this option is not selected, the current folder becomes the folder used by the VM, and it will not be possible to import it again without exporting it once more. The taxonomy of the folder remains almost the same, except that the EXP files are changed to XML files and the config.xml file is deleted.

Providing access to Hyper-V

We all know that in order to gain access to a server we have the RDP protocol, better known as Remote Desktop (or Terminal Services admin mode). We just look for the Remote Desktop shortcut (or for the MSTSC.exe file), provide the server name or IP address, and voila! We have access to the server (or to any individual virtual machine). Even though RDP is a great technology, in many circumstances it is not the best tool to manage our servers. Connecting to host servers using RDP and then connecting to the VM through the Hyper-V management console (right-click the VM and select Connect) is not the best idea. Another example is when your host server is installed as Server Core (Windows Server 2008 R2 without a graphical user interface) or when using the free Microsoft Hyper-V Server 2008 R2. Instead of opening RDP sessions, you would like to have the Hyper-V Management Console installed directly on your own local client and manage the host server and the VMs from there.

In order to remotely manage our server, we have to take care of the following:

  • Installing the management tools on your client

  • Enabling the corresponding rules in the Firewall (both host and client)

  • Checking if the host server and/or the client are in a workgroup environment

The first task is very simple: just download the Remote Server Administration Tools (RSAT) corresponding to your OS, depending on whether you have Windows Vista, Windows 7, or Windows 8, and the right architecture, either 32-bit or 64-bit (just go to the Microsoft Download Center web page and look for the corresponding download). Note that Microsoft does not provide RSAT for Windows XP; in this case you would need a third-party tool. Once you have downloaded and installed the tools, go to the Control Panel, select Programs | Features | Turn Windows Features On or Off, and from there select the tools you want to have available. Now we need to perform some additional steps before we can successfully run the Hyper-V Management Console.

The next thing to take care of is having TCP/UDP communication to and from the server. Older Windows versions had a very simple and easy-to-manage firewall, but Windows Server 2008 has a very complete one, which we should configure correctly in order to manage our Hyper-V remotely without losing security. We could do this configuration through the firewall GUI, but there is a simpler way: the NETSH.exe command-line tool. Open a new command window (cmd) as an Administrator and enter the following:

netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes

We should get an answer saying four rules were updated (three inbound and one outbound, though this may vary depending on which client OS you are using). We can confirm this by exploring the Windows Firewall with Advanced Security option in the Windows Administrative Tools. This command has to be executed on each of the Hyper-V servers and on any client you plan to use for Hyper-V administration. There are some other firewall configurations you may want to apply. In case we haven't enabled the Remote Desktop rule, we should execute the following NETSH command from the command prompt:

netsh advfirewall firewall set rule group="Remote Desktop" new enable=yes

Or to enable all of the firewall rules for remote management of the server, use the following:

netsh advfirewall firewall set rule group="Remote Administration" new enable=yes

Or if you are planning to remotely manage the host server disk volumes by using the MMC, just execute the following:

netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes
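To keep the four rule groups used above in one place, a small script can generate the netsh commands for review before running them on each host and client (the group names are the ones shown in this section; this sketch only builds the command strings, it does not execute them):

```python
# Generate the netsh commands for every firewall rule group this section
# enables, so the list can be audited or pasted into a batch file.
RULE_GROUPS = [
    "Windows Management Instrumentation (WMI)",
    "Remote Desktop",
    "Remote Administration",
    "Remote Volume Management",
]

def netsh_enable(group):
    return f'netsh advfirewall firewall set rule group="{group}" new enable=yes'

for cmd in (netsh_enable(g) for g in RULE_GROUPS):
    print(cmd)
```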

And last, but not least, we have to consider the domain membership (or lack thereof) of the computers running the Hyper-V role and the ones used to administer it. Here we find the following four scenarios:

  • The Hyper-V host server and the client management computer belong to an Active Directory domain and are fully trusted

  • The Hyper-V host server is a domain member server, but the management computer is in a workgroup

  • The management computer is joined to a domain, but the Hyper-V host server is in a workgroup

  • Both the Hyper-V host server and the management client are in a workgroup

Following this order, the easiest and most secure setup is when both machines are joined to an Active Directory domain, and the configuration described earlier assumes this scenario.

John Howard, a Senior Program Manager on the Hyper-V team at Microsoft, has created a very powerful tool (HVRemote, which can be found at http://archive.msdn.microsoft.com/HVRemote) that can help us fully configure our servers in a few steps for all of the mentioned scenarios. You can find his blog at http://blogs.technet.com/b/jhoward/archive/2008/03/28/part-1-hyper-v-remote-management-you-do-not-have-the-requested-permission-to-complete-this-task-contact-the-administrator-of-the-authorization-policy-for-the-computer-computername.aspx.

Reliability and fault tolerance

Having placed all the eggs in the same basket, we want to be sure that the basket is protected. Now imagine that instead of eggs we have virtual machines, and instead of the basket we have a Hyper-V server. We need this server to be up and running most of the time, resulting in reliable virtual machines that can run for a long time.

For that reason we need a fault-tolerant system, that is to say, a system capable of running normally even if a fault or failure arises. How can this be achieved? Well, just use more than one Hyper-V server. If a single Hyper-V server fails, all the VMs running on it fail too; but if we have a couple of Hyper-V servers running hand in hand, then when the first one becomes unavailable, its twin takes over the load. Simple, isn't it? It is, if it is correctly dimensioned and configured. This is the foundation for failover clustering and features such as Live Migration.

In a previous section we discussed how to migrate a VM from one Hyper-V server to another, but the import/export technique causes some downtime for our VMs. You can imagine how much time it would take to move all our machines if a host server failed; even worse, if the host server is dead, you can't export your machines at all. This is one of the reasons we should create a cluster.

As we already stated, a fault-tolerant solution basically means duplicating everything in the given solution. If a single hard disk may fail, we configure additional disks (for example, RAID 1 or RAID 5); if a NIC is prone to failure, teaming two NICs may solve the problem. Of course, if a single server may fail (dragging down with it all the VMs on it), the solution is to add another server; but here we face the problem of storage: each disk can only be physically connected to a single data bus (consider this the cable, for simplicity), and each server must have its own disk in order to operate correctly. This can be solved by using shared storage, such as directly connected SCSI storage, a SAN (Storage Area Network, connected by optical fiber), or the very popular NAS (Network Attached Storage), connected through NICs.

As we can see in the preceding diagram, the red circle contains two servers; each is a node within the cluster. When you connect to this infrastructure, you don't even see the number of servers, because a cluster exposes shared resources such as the cluster name, IP address, and so on. You connect to the first available physical server, and in the event of a failure, your session is automatically transferred to the next available physical server. Exactly the same happens at the server's backend. We can define certain resources as shared cluster resources, and the cluster then administers which physical server uses them. For example, in the preceding diagram there are several iSCSI targets (Internet SCSI targets) defined on the NAS, and the cluster accesses them according to the active physical node, thus making your service (in this case, your configured virtual machines) highly available. You can see the iSCSI FAQ on the Microsoft web site (http://go.microsoft.com/fwlink/?LinkId=61375).

In order to use a failover cluster solution, the hardware must be marked as Certified for Windows Server 2008 R2 and should be identical (in some cases the solution may work with dissimilar hardware, but maintenance, operation, and capacity planning, to name a few, will become harder, making the solution more expensive and more difficult to maintain). The full solution also has to successfully pass the cluster validation wizard when creating the cluster. The storage solution must be certified as well and has to be Windows Cluster compliant (mainly supporting the SCSI-3 Persistent Reservations specification), and it is strongly recommended that you implement an isolated LAN exclusively for storage purposes. Remember that in a fault tolerant solution all infrastructure devices have to be duplicated, even networks. The configuration wizard will let us configure our cluster even if the network is not redundant, but it will display a warning notifying you of this point.

Ok, let's get to business. To configure a fault tolerant Hyper-V cluster, we need to use Cluster Shared Volumes, which, in simple terms, lets Hyper-V run as a clustered service. As we are using a NAS, we have to configure both ends: the iSCSI initiator (on the host server) and the iSCSI target (on the NAS). You can see this Microsoft Technet video at http://technet.microsoft.com/en-us/video/how-to-setup-iscsi-on-windows-server-2008-11-mins.aspx or read the Microsoft article on configuring iSCSI initiators at http://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx. To configure the iSCSI target on the NAS, please refer to the NAS manufacturer's documentation. Apart from the iSCSI disks we have for our virtual machines, we need to provide a witness disk (known in the past as the quorum disk). This disk (1 GB will do the trick) is used to orchestrate and synchronize our cluster.
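The witness disk exists so that a two-node cluster can still decide who is allowed to run when one node disappears. Here is a minimal sketch of the voting idea behind the Node and Disk Majority quorum mode (a conceptual model in Python, not the actual cluster service):

```python
# Conceptual model of Node and Disk Majority quorum: each node has one vote,
# and the witness disk contributes one more. The cluster keeps running only
# while a strict majority of all votes is present.

def has_quorum(nodes_online: int, total_nodes: int, witness_online: bool) -> bool:
    total_votes = total_nodes + 1                       # all nodes + witness disk
    votes_present = nodes_online + (1 if witness_online else 0)
    return votes_present > total_votes / 2

# Two-node cluster: losing one node is survivable only if the witness is reachable.
print(has_quorum(1, 2, witness_online=True))   # True  (2 of 3 votes)
print(has_quorum(1, 2, witness_online=False))  # False (1 of 3 votes)
```

This is why a tiny 1 GB disk matters so much: without its tie-breaking vote, a two-node cluster that loses one node would lose quorum and stop.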

Once we have our iSCSI disk configured and visible (you can check this by opening the Computer Management console and selecting Disk Management) in one of our servers, we can proceed to configure our cluster.

To install the Failover Clustering feature, we have to open the Server Manager console, select the Features node on the left, then select Add Features, and finally select the Failover Clustering feature (this is very similar to the procedure we used when we installed the Hyper-V role in the Requirements and Installation section). We have to repeat this step for every node participating in the cluster.

At this point we should have both the Failover Clustering feature and the Hyper-V role set up on the servers, so we can open the Failover Cluster Manager console from Administrative Tools and validate our configuration. Check that Failover Cluster Manager is selected, and in the center pane, select Validate Configuration (a right-click will do the trick as well). Follow all the instructions and run all of the tests until no errors are shown. When this step is completed, we can proceed to create our cluster.

In the same Failover Cluster Manager console, in the center pane, select Create a Cluster (a right-click can do the trick as well). This wizard will ask you for the following:

  • All the servers that will participate in the cluster (a maximum of 16 nodes and a minimum of 1, which is useless, so better go for two servers)

  • The name of the cluster (this name is how you will access the cluster, not the individual server names)

  • The IP configuration for the cluster (like the name, this belongs to the cluster rather than to any single server)

We still need to enable Cluster Shared Volumes. To do so, right-click the failover cluster, and then click Enable Cluster Shared Volumes. The Enable Cluster Shared Volumes dialog opens. Read and accept the terms and restrictions, and click OK. Then select Cluster Shared Volumes and under Actions (to the left), select Add Storage and select the disks (the iSCSI disks) we had previously configured.

Now the only thing left is to make the VM highly available, which we created in the Quick start – creating a virtual machine in 8 steps section (or any other VM you have created, or any new VM you want to create; be imaginative!). The OS in the virtual machine can then fail over to another node with almost no interruption. Note that the virtual machine must not be running in order to make it highly available through the wizard.

  1. In the Failover Clustering Manager console, expand the tree of the cluster we just created.

  2. Select Services and Applications.

  3. In the Action pane, select Configure a Service or Application.

  4. In the Select Service or Application page, click Virtual Machine and then click Next.

  5. In the Select Virtual Machine page, check the name of the virtual machine that you want to make highly available, and then click Next.

  6. Confirm your selection and then click Next again.

  7. The wizard will show a summary and give you the option to view the report.

  8. And finally, under Services and Applications, right-click the virtual machine and then click Bring this service or application online. This action will bring the virtual machine online and start it.

Integrating the virtual host

When we speak of integrating the virtual machine, what we mean is that the host server can communicate directly with the virtual machine, and the other way around. This internal communication has to be reliable, fast, and secure.

The hypervisor provides a special mechanism to facilitate this communication: the Hyper-V VMBus. As the name states, it is a dedicated communication bus between the parent partition and the child partition (or, following the naming convention of this book, the host server and the virtual machines) that provides high speed, point-to-point, secured communication.

But what about the virtual machine? Well, as the VMBus is the Hyper-V part, we also need the client part. As you may expect, the component that facilitates such communication on the guest VM is a set of drivers called Integration Services. In other words, a set of agents running inside our virtual machine that communicates with the Hyper-V host in a secure and fast way. Once these components are installed, the OS in the virtual machine becomes aware that it is running in a virtual partition and can coordinate itself with the host beneath. With that said, you may be wondering: how is this going to be valuable for me? Well, consider a small example. If the tools are installed on the virtual machine, we can send a Shut Down message from the Hyper-V management console and the VM will shut down as if we were shutting it down from its own desktop.

Now that we know what the integration topic is about, we can talk about the two types of device drivers that Hyper-V provides: emulated and synthetic. As the name suggests, emulated drivers are a very basic way of providing the service; they mimic real physical hardware, so every request has to be intercepted and translated by the host before it reaches the physical device, which is slow. Synthetic drivers do not need any such translation; they act as a direct gate onto the VMBus, so requests travel straight from the guest to the physical device on the host.

Before you ask, emulated drivers exist to provide basic functionality to every guest OS installed on a VM. In the initial setup stage of our VM, we do need some kind of device driver (for the display or the network, say). You can think of emulated drivers as those cheap flip-flops that you can find almost anywhere at the beach. Those flip-flops are neither comfortable nor fancy (they don't even last long), but they will fit almost everybody, thereby fulfilling their goal. Of course you will want to change from such an uncomfortable flip-flop to a more comfortable shoe, one that fits your foot perfectly, looks nice, and can even help you run. Well, think of that shoe as a synthetic driver.

As you may have already guessed, synthetic device drivers are not available for all OSes, but are only available for a more select group. Linux fans, don't worry, there are synthetic drivers for some of the most common distributions. The drivers for Linux can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=11674 and are intended for Red Hat Enterprise, CentOS, and SUSE Linux Enterprise. For a complete list of supported OSes (Linux and Microsoft), you can visit the Microsoft Technet site at http://technet.microsoft.com/en-us/library/cc794868(v=ws.10).aspx.

We are just one topic away from the installation of these drivers. But first we will describe in more detail what these drivers do:

  • VM connection enhancements. If we connect to a machine without integration services, the mouse pointer will get trapped inside the VM, and we will have to use a key combination (by default Ctrl + Alt + left arrow) to release it. This enhancement will make the VM window behave as any other window.

  • Drivers with Hyper-V knowledge: This is just another name for the synthetic drivers described earlier.

  • Time Synchronization service: A mechanism to maintain time synchronization between the host and the guest. Because the guest has no real clock battery of its own, it uses the host clock to synchronize.

  • Heartbeat service: The host sends heartbeat messages at regular intervals and waits for a heartbeat answer from the guest VM. If these messages go unanswered, Hyper-V considers that virtual machine to have a problem and logs it as an error.

  • Shut down service: As mentioned earlier, a graceful shutdown of the VM without the need to log in and manually shut it down.

  • Volume Shadow-Copy requestor: Provides an interface to the Volume Shadow Copy Service (VSS) in order to create shadow copies of the volume, but only when the guest OS supports it.

  • Key/Value Pair Exchange: A set of registry keys used by the VM and the host to identify and obtain information. The host can directly access and modify such keys. You can see these values in the Hyper-V management console (in the VM properties).
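The Heartbeat service above can be sketched in a few lines (a conceptual model with invented names and thresholds, not the actual Hyper-V implementation): the host counts consecutive unanswered heartbeats and flags the VM once too many are missed in a row.

```python
# Conceptual model of the Heartbeat service: one boolean per heartbeat
# interval (True = the guest answered). The host tolerates a few missed
# beats, but a run of silence beyond the threshold is logged as an error.

def evaluate_heartbeats(replies, max_missed=2):
    """Return 'OK' or 'Error', as the host would log the VM's health."""
    missed = 0
    for answered in replies:
        missed = 0 if answered else missed + 1   # a reply resets the counter
        if missed > max_missed:
            return "Error"
    return "OK"

print(evaluate_heartbeats([True, True, False, True]))           # OK
print(evaluate_heartbeats([True, False, False, False, False]))  # Error
```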

The Integration drivers are fully supported on the following OSes. For other OSes not mentioned here, not all services and/or features may be available:

Operating system | Supported services
Windows Server 2008 SP1 and Windows Server 2003 SP2 (32-bit and 64-bit versions) | Time Synchronization, Heartbeat, Shutdown, Key/Value Pair Exchange, VSS
Windows Vista SP1 and Windows 7 SP1 (32-bit and 64-bit versions) | Time Synchronization, Heartbeat, Shutdown, Key/Value Pair Exchange, VSS
Windows 2000 Server and Advanced Server SP4 | Time Synchronization, Heartbeat, Shutdown, Key/Value Pair Exchange
Windows XP SP2/SP3 (32-bit and 64-bit versions) | Time Synchronization, Heartbeat, Shutdown, Key/Value Pair Exchange

Now the main dish. Installing Integration Services is quite simple. The VM has to be running, and we have to connect to it using the Connect… option from the Hyper-V Manager console. From there we select the Action menu and then Insert Integration Services Setup Disk, as shown in the following screenshot:

Depending on whether the CD-ROM autoplay feature is enabled, you may get a pop-up window asking to execute the inserted media. If you do, select the install option. If autoplay is not enabled, browse the CD-ROM and manually execute the Integration Services setup file.

A progress window will show the components being installed on the virtual machine. Once finished, a window will ask for a reboot to complete the operation, as shown here:

After the virtual machine is rebooted, the drivers will start working and your machines (both the hypervisor and the virtual machine) will be integrated. To review the configuration, we can open the Services console from Control Panel on the virtual machine.

Alternatively, from the Hyper-V Manager console, select the corresponding virtual machine, right-click on it, select Settings…, and then click Integration Services under Management in the left pane of the settings window.

How much will it cost?

By now you should have a clear idea of what virtualization is, why it is so popular, and what its nicest features are. But we live in a business world, and so we will face the moment when we are asked: how much is this going to cost? And if you are like me, preferring nice toys such as Hyper-V to playing with numbers and ROI calculations, you will try to avoid it. Sorry to say it, but any credible economic quote must start with people like us.

The Return on Investment (ROI) is, in simple terms, the profit we make over a given period relative to the investment. Nowadays everybody wants to increase their ROI. This can be accomplished by introducing new technologies that make our life easier, by consolidating and reducing the infrastructure, or by simplifying administration. Our challenge is to identify such investments and treat these numbers so that we can present the ROI in different ways. Don't misunderstand me when I say treat those numbers; it is not manipulation but a different way of understanding them. We have the hard costs of our solution; understand that hard costs are hardware, software, or anything else that is easily accounted for, most of the time by a single invoice. And then there are the infamous soft costs, which can be as simple as how many watts a server uses, or as complicated as the percentage of operational cost (including help desk) that one single Windows server consumes.
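The definition above fits in one line of code (the figures below are invented purely for illustration):

```python
# ROI = (gain - cost) / cost, expressed as a percentage.

def roi(gain_from_investment: float, cost_of_investment: float) -> float:
    """Return on Investment over the period, as a percentage."""
    return (gain_from_investment - cost_of_investment) / cost_of_investment * 100

# Example: 50,000 invested in consolidation, 80,000 saved over the period.
print(round(roi(80_000, 50_000)))  # 60 (percent)
```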

There are many ways to calculate these things, but the procedure may vary from company A to company B, because what is important to A may not be useful to B and vice versa. You may be wondering: how should I calculate this? And how do I know whether it is correct? Well, if you do it and whoever you report to understands and agrees with it, then it is correct. As you have already figured out, in this section we are not going to go through this exercise, but instead give you a good baseline to start this task according to your own environment.

Let's start with the calculations. First, hardware investment. In a typical (call it physical) environment, you buy a CPU (to name one component) capable of delivering its full power to the application on top, even though the application rarely demands it. Simply put, you bought a CPU that is utilized at an average of 25 percent (or 35 percent, or any other percentage well below full utilization of the chip). By virtualizing, you share the CPU load over the configured virtual machines, achieving much better utilization; the challenge is to assign an economic value to both scenarios and compare them. The hardware cost is then not assigned to a single server (as when you have a dedicated database server for HR), but spread over every virtual machine running on that hardware.
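The comparison described above, amortizing the same server cost over one workload versus several VMs, looks like this (all figures invented for illustration):

```python
# Amortizing a server's hardware cost over the workloads it carries.

def cost_per_workload(server_cost: float, workloads: int) -> float:
    """Hardware cost attributed to each workload on the box."""
    return server_cost / workloads

physical = cost_per_workload(4000, 1)  # one dedicated HR database server
virtual = cost_per_workload(4000, 8)   # the same box hosting 8 VMs
print(physical, virtual)  # 4000.0 500.0
```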

Continuing with the calculations, we have to take care of housing (that is, the facilities required to run the servers: the computer room, the air conditioning to keep our devices cool, the electricity used, the cabling, and so on), which can be very simple if we have a single closet or very complex if we have a dedicated room. As a rule of thumb, we take the devices plus the setup cost plus the running expenses, divided between every service provided. As we are speaking about virtualization, a single server may host several virtual machines, so the calculated cost per service decreases as we host more VMs, even if the host servers are bigger and more powerful (and likely more expensive).
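The rule of thumb above, spelled out as arithmetic (all figures invented for illustration):

```python
# Housing rule of thumb: (devices + setup + running expenses) / services.

def housing_cost_per_service(devices: float, setup: float,
                             running_expenses: float, services: int) -> float:
    """Facility cost attributed to each hosted service (VM)."""
    return (devices + setup + running_expenses) / services

# The same facility bill spread over 10 VMs versus 40 VMs on bigger hosts.
print(housing_cost_per_service(12_000, 3_000, 5_000, 10))  # 2000.0
print(housing_cost_per_service(12_000, 3_000, 5_000, 40))  # 500.0
```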

Then we have the software cost. Traditionally we need one OS license per physical server (or 100 licenses for 100 servers), but Microsoft has developed a very interesting licensing scheme.

If we deploy any other virtualization technology, we have to buy a license for each VM, no matter what. If we deploy Windows Hyper-V Server 2008 (which, by the way, is free), we still need one OS license per VM. But the other three editions of Windows Server 2008 (Standard, Enterprise, and Datacenter) give us a nice deal, as long as they are installed and dedicated as Hyper-V hosts (yes, unfortunately we cannot even install a simple DNS server on the host).

The Standard edition includes one OS license for a VM (say, one for the host OS plus one for a VM); Enterprise includes four OS licenses for VMs (again, one for the host plus four for VMs); and Datacenter… mmm, I even get nervous… gives us unlimited licenses (well, not strictly unlimited, because by design there is a limit of 384 running VMs per node).

The bottom line is that choosing the wrong edition will bring no savings, while choosing the right brand and the right edition will lower the licensing cost significantly (Hyper-V free plus 30 VMs at, let's say, $200 each is a total of $6,000; but Datacenter plus 200 VMs costs only the price of Datacenter, even if that is $5,000).
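The arithmetic behind this example can be written down explicitly (the $200 and $5,000 figures are the chapter's illustrative prices, not real list prices):

```python
# Licensing comparison from the text: free Hyper-V Server still needs one
# guest OS license per VM, while Datacenter covers all guests on the host.

def free_hyperv_cost(vms: int, guest_license: float) -> float:
    """Hyper-V Server itself is free, but every VM needs its own OS license."""
    return vms * guest_license

def datacenter_cost(datacenter_license: float) -> float:
    """One Datacenter license covers every guest on the host (up to 384 VMs)."""
    return datacenter_license

print(free_hyperv_cost(30, 200))  # 6000 - matches the text's example
print(datacenter_cost(5000))      # 5000 - cheaper even for 200 VMs
```

The crossover point is simply the Datacenter price divided by the per-guest license: above 25 VMs (at these made-up prices), Datacenter wins.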

Dealing with licenses is always a bit of chaos, so it is strongly recommended to call your local Microsoft representative, who will be pleased to help you in your licensing journey.

And last but not least (and the biggest cost within our IT service): the manpower. Imagine you and your colleagues having to visit the server on the second floor to reboot a device, and then running to the sixteenth floor because another server is not reachable over the network. With our new Hyper-V infrastructure we will not face this (or if we do, we just have to run to the single place where the host servers reside), thanks to the consolidated state of our machines. We will be optimizing our IT support even as we add more VMs.

Don't just trust my calculations, or even Microsoft's, which claim that their solution is six times cheaper than the competitors'. Take a pen and a piece of paper and create your own numbers. I'm completely sure you will love the final result, and so will your boss!

And that's it!!

By this point, you should have a fully working virtual machine running on Microsoft © Hyper-V server 2008 R2.