
Instant Hyper-V Server Virtualization Starter

By: Vicente Eguibar

Overview of this book

Doing more with less and making the most of what we have is the aim of virtualization. Although resources such as the OS and printers are still shared, as under the time-sharing model, in a virtualized environment individual virtual servers are isolated from each other, giving the illusion of multiple fully functional systems. Hyper-V Server 2008 R2 provides the software infrastructure and basic management tools that you can use to create and manage a virtualized server computing environment. "Instant Hyper-V Server Virtualization Starter" will teach you the basics of virtualization and get you started with building your first virtual machine. This book also contains tips and tricks for using Microsoft Hyper-V Server 2008 R2. It is a crash course on getting your virtualization infrastructure working, creating a virtual machine, and making it more robust by anticipating failures. You will also learn how to create a virtual network so that your virtual machines can communicate among themselves and with the rest of the world. You will even learn how to calculate the costs involved in your Microsoft Hyper-V virtualization solution.

So, what is Microsoft© Hyper-V Server 2008 R2?


Welcome to the world of virtualization. Over the next few pages we will explain in simple terms what virtualization is, where it comes from, and why this technology is amazing. So let's start.

The concept of virtualization is not really new; as a matter of fact, it is in some ways an inheritance from the mainframe world. For those of you who don't know what a mainframe is, here is a short explanation: a mainframe is a huge computer that can have from several dozen up to hundreds of processors, tons of RAM, and enormous storage space. Think of the supercomputers used by international banks, car manufacturers, or even aerospace companies.

These monster computers have a "core" operating system (OS) that creates logical partitions of the resources and assigns each partition to a smaller OS. In other words, the full hardware power is divided into smaller chunks, each with a specific purpose. As you can imagine, not many companies can afford this kind of equipment, which is one of the reasons why small servers became so popular. You can learn more about mainframes on the Wikipedia page at http://en.wikipedia.org/wiki/Mainframe_computer.

Starting in the 80s, small servers (mainly based on Intel© and/or AMD© processors) became quite popular, and almost anybody could buy a simple server. Mid-sized companies, however, soon found the number of servers in their data centers growing. In later years, the power provided by new servers was enough to fulfill the most demanding applications and, guess what, even to support virtualization.

But you may be wondering: what is virtualization? The concept, even if it sounds a bit bizarre, is that a program behaves as a normal application to the host OS, asking for CPU, memory, disk, and network (to name the four main subsystems), but uses those resources to create hardware: virtualized hardware, of course, on which a brand new OS can be installed. In the diagram that follows, you can see a physical server, including CPU, RAM, disk, and network. This server needs an OS on top, and from there you can install and execute programs such as Internet browsers, databases, spreadsheets, and of course virtualization software. This virtualization software behaves the same way as any other application: it asks the OS for a file stored on disk, access to a web page, or more CPU time; so to the host OS, it is just a standard application that demands resources. But within the virtualization application (also known as the hypervisor), virtual hardware is created; in other words, some fake hardware is presented at the top end of the program.

At this point we can start the OS setup on this virtual hardware, and the OS will recognize the hardware and use it as if it were real.
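To make this less abstract, here is a minimal sketch of how the hypervisor's virtual machines show up as ordinary manageable objects once Hyper-V is installed. It assumes a Hyper-V Server 2008 R2 host, where virtual machines are exposed through the root\virtualization WMI namespace, and an elevated PowerShell prompt on that host (Get-WmiObject also accepts a -ComputerName parameter for a remote host):

    # List the Hyper-V host and its virtual machines via WMI.
    # On Hyper-V Server 2008 R2 the hypervisor exposes them in the
    # root\virtualization namespace as Msvm_ComputerSystem objects.
    Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
        Select-Object ElementName, EnabledState

    # ElementName is the VM name (the physical host appears as its own
    # entry); EnabledState is a numeric code (2 = running, 3 = stopped).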

So coming back to the original idea, virtualization is a technique, based on software, to execute several servers and their corresponding OSes on the same physical hardware. Virtualization can be implemented on many architectures, such as IBM© mainframes, many distributions of Unix© and Linux, Windows©, Apple©, and so on.

We already mentioned that virtualization is based on software, but there are two main kinds of software you can use to virtualize your servers. The first type behaves like any other application installed on the server and is also known as workstation or software-based virtualization. The second is part of the kernel of the host OS and is enabled as a service. This type is also called hardware virtualization, and it uses special CPU characteristics (such as Data Execution Prevention and virtualization support), which we will discuss in the installation section. The main difference between the two is performance. With software/workstation virtualization, every request for hardware resources has to travel from the application down through the OS into the kernel in order to reach the resource. In the hardware solution, the virtualization software, or hypervisor layer, is built into the kernel and makes extensive use of the CPU's virtualization capabilities, so resource demands are served faster and more reliably, as in Microsoft© Hyper-V Server 2008 R2.
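If you want to check those CPU characteristics ahead of the installation section, here is a minimal sketch, assuming a Windows host with PowerShell available. Data Execution Prevention can be queried through the standard Win32_OperatingSystem WMI class; for the Intel VT-x/AMD-V virtualization extensions themselves, one option is the free Sysinternals Coreinfo tool (coreinfo -v), since Windows Server 2008 R2 does not expose them directly through WMI:

    # Check whether Data Execution Prevention (DEP) is available,
    # one of the CPU prerequisites for Hyper-V.
    Get-WmiObject -Class Win32_OperatingSystem |
        Select-Object DataExecutionPrevention_Available

    # For the virtualization extensions (Intel VT-x / AMD-V), download
    # Sysinternals Coreinfo and run:
    #   coreinfo -v
    # An asterisk next to VMX (Intel) or SVM (AMD) in its output means
    # the feature is present and enabled in the firmware.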