Implementing VMware Horizon View 5.2

By Jason Ventresco

Overview of this book

VMware Horizon View helps you simplify desktop and application management while increasing security and control. This book introduces all of the components of the VMware Horizon View suite, walks you through their deployment, and shows how they are used. It also covers how to assess your virtual desktop resource requirements and how to build an optimized virtual desktop master image.

Implementing VMware Horizon View 5.2 provides the information needed to deploy and administer your own end-user computing infrastructure. You will implement the core components and features, including VMware View Connection Server, View Composer, View Transfer Server, and View Security Server, and learn how to design a performant, flexible desktop virtualization solution. The book then takes you through application virtualization with VMware ThinApp, Persona Management, and the creation of desktop pools, followed by View Client options, desktop maintenance, and the virtual desktop master image. Finally, it covers SSL certificate management, Group Policies, PowerCLI, and View design and maintenance to help you get the most out of VMware View. If you want to learn how to design, implement, and administer an optimized desktop virtualization solution with VMware View, this book is for you.

Using Performance Monitor to properly size the infrastructure


Once we have gathered desktop Performance Monitor data, we need to analyze it to determine how to size our View infrastructure. This section outlines the process of turning raw Performance Monitor data into View infrastructure requirements.

Basics of sizing a View infrastructure

One of the most accurate ways to determine our infrastructure requirements is to take an average of each of the Performance Monitor counter values we have gathered, which should provide us with a per-desktop figure for the amount of resources that a given desktop type requires.

The first thing that must be taken into consideration is whether or not we plan on separating our virtual desktops based on any sort of metric or other user classification. In the previous section, we broke down users into one of three different groups: Task Workers, Knowledge Workers, and Power Users. Each group has different desktop performance expectations, and as their expected performance requirements increase, their tolerance for events that impact that performance decreases. Each user base is different, of course, but when designing our View infrastructure we should consider whether or not we should provide unique storage, network, or compute resources for each of our own user classes. The following provides an example of how that might be accomplished:

  • Task Workers:

    • Higher desktop consolidation ratios per vSphere host

    • Lower tier storage

  • Knowledge Workers:

    • Average desktop consolidation ratios per vSphere host

    • Medium tier storage

  • Power Users:

    • Low desktop consolidation ratios per vSphere host

    • High performing storage

    • Network QoS to guarantee desktop bandwidth availability

The analysis done in this section assumes that we are sizing a View infrastructure for one classification of user, and not multiple user classifications that may have different performance requirements. As we discussed earlier, our final design may allocate unique resources to each user classification in order to provide the expected level of performance.

Interpreting Performance Monitor data

The following screenshot shows a portion of the Performance Monitor data collected from a sample Windows desktop. This data was imported from the CSV file created by the Performance Monitor application.

Column A displays a time reference showing that the data was gathered in 15-second intervals, as configured in the previous section. Row 1 displays the counter names, which are arranged by default in alphabetical order.

The following table shows the average value of each of the Performance Monitor counters from our sample desktop. To make the results easier to read, values recorded in bytes were converted to megabytes.

Performance Monitor Counter               Average Value
Memory Committed Megabytes per second     2,443.4 Megabytes
Network Total Megabytes per second        0.75 Megabytes
Disk Reads per second                     7.25 Reads
Disk Read Megabytes per second            0.109 Megabytes
Disk Writes per second                    10.09 Writes
Disk Write Megabytes per second           0.120 Megabytes
% Processor Time                          13.80 percent

This data provides the starting point for determining the amount of resources we need to provide for each virtual desktop, and by extension how many desktops we can run on each vSphere host.

Note

Storage for our virtual desktops can be provided using a number of different solutions that include both server-based (local) storage, and shared storage arrays. The Performance Monitor data we have collected includes counters for the number of Disk Reads and Disk Writes per second, which is the basis for properly sizing whichever storage solution we plan to use.

Regardless of which storage protocol your vSphere hosts use, there will be some overhead involved. After you have measured your baseline disk bandwidth (Disk Read or Write Megabytes per second) or I/O (Disk Reads or Writes per second) from your reference desktop, add 15 percent to the recorded value before calculating your overall resource requirements. The sample calculations in this chapter involving Disk Reads, Disk Writes, Disk Read Megabytes per second, and Disk Write Megabytes per second assume that the 15 percent overhead has already been added.
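As a trivial illustration of that uplift (the 0.1043 MB/s raw figure below is hypothetical, chosen only so the adjusted value matches the sample table that follows):

```python
# Apply the 15 percent protocol-overhead uplift to a raw measured disk figure.
raw_write_mb_s = 0.1043           # hypothetical raw Disk Write MB/s measurement
sized_write_mb_s = raw_write_mb_s * 1.15
print(f"{sized_write_mb_s:.3f}")  # prints 0.120
```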

Server processor configurations are a good starting point for determining how many desktops we can run per vSphere server. While most server types can accommodate a number of different memory configurations, they support a fixed number of processors, and each of those processors comes with a specific number of CPU cores. For the purpose of this exercise, we will assume that we have existing servers that we want to use for our View infrastructure.

Server Resource             Quantity
Physical Processor Count    2
Cores Per Processor         8
Memory                      144 GB
Network Interfaces          2 x 10 Gb
Fibre Channel Interfaces    2 x 4 Gb (800 MB each)

Using these specifications, we can determine exactly how many View desktops we should be able to host on this server. The goal is to determine which resource is the limiting factor, based on the average values obtained during our Performance Monitor data collection. To determine the number of desktops supported, we divide the aggregate quantity of a server resource by the average per-desktop usage of that resource, as determined by our analysis of the Performance Monitor data. View supports up to 16 virtual desktop CPUs per physical processor core, but your own environment may support fewer based on the average desktop CPU utilization.

With regard to virtual desktop memory, it is important to remember that the amount of memory we assign to the desktop should be at least 25 percent higher than the average value obtained from our Performance Monitor Memory Committed Megabytes counter data. The reason for this is that we want to prevent the desktop from having to utilize the Windows paging file during spikes in memory utilization, which would increase the amount of I/O that the storage infrastructure must service, and potentially impact virtual desktop performance.

Note

Storage technologies such as all-flash storage arrays and flash storage installed directly within the vSphere hosts can lessen the performance impact of virtual desktop memory swapping. Just be sure to assign the virtual desktops the minimum memory required by the OS vendor to ensure that your configuration will be supported.

The previous table shows our four core resources: server processor power measured in number of cores, server memory, server network bandwidth, and storage network bandwidth. To calculate the number of desktops supported, we use the following calculations:

  • Processor: (Number of server cores * 100) / % Processor Time of reference desktop:

    • (16 * 100) / 13.8 = 115.94 desktops

  • Memory: Total server memory in MB / (Memory Committed MB per second of reference desktop * 1.25):

    • 147,456 / (2,443.4 * 1.25) = 48.28 desktops

    Note

    The value obtained when you multiply the desktop Memory Committed MB per second by 1.25 (2,443.4 * 1.25 = 3,054.25 MB) indicates that each desktop requires 3 GB of memory. This should provide sufficient free memory and, in turn, reduce the likelihood of having to use the Windows paging file.

  • Network: Total server network bandwidth in MB / Network Total MB per second of reference desktop:

    • 2,560 / 0.75 = 3,413 desktops.

    Note

    Remember to convert the network adapter line speeds from megabit to megabyte to match the output format of the Performance Monitor data. The following formula is used to perform the conversion:

    Value in megabits / 8 = Value in megabytes

  • Storage Network: Total server storage network bandwidth in MB / (Disk Read MB per second + Disk Write MB per second) of the reference desktop:

    • 1,600 / (0.109 + 0.120) = 6,987 desktops.
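The four divisions above can be sketched in a few lines of code. This is a minimal example, not part of the book's tooling; substitute your own Performance Monitor averages and server specifications:

```python
# Per-resource desktop limits for the sample server, using the chapter's figures.
CORES = 16                        # 2 processors x 8 cores
MEMORY_MB = 144 * 1024            # 144 GB of server memory, in MB
NETWORK_MB_S = 2 * 10 * 1024 / 8  # two 10 Gb NICs, megabits converted to MB
STORAGE_MB_S = 2 * 800            # two Fibre Channel interfaces at 800 MB each

CPU_PCT = 13.80                   # average % Processor Time per desktop
MEM_MB = 2443.4                   # average Memory Committed MB per desktop
NET_MB_S = 0.75                   # average Network Total MB per desktop
DISK_MB_S = 0.109 + 0.120         # Disk Read + Write MB (15% overhead included)

limits = {
    "processor": (CORES * 100) / CPU_PCT,
    "memory": MEMORY_MB / (MEM_MB * 1.25),  # 25 percent headroom over committed
    "network": NETWORK_MB_S / NET_MB_S,
    "storage": STORAGE_MB_S / DISK_MB_S,
}
bottleneck = min(limits, key=limits.get)
for resource, count in limits.items():
    print(f"{resource}: {count:.2f} desktops")
print(f"limiting resource: {bottleneck}")
```

Running this reproduces the 115.94, 48.28, and 3,413 figures above (the storage result, 6,986.9, is what the text rounds up to 6,987), and reports memory as the limiting resource.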

These calculations assume that we are using a dedicated storage network, in this case Fibre Channel, to connect our vSphere servers to a storage array. If our storage traffic will share the same network connections as our virtual machine traffic, we must combine the values observed for Network Total MB per second, Disk Read MB per second, and Disk Write MB per second when determining how many desktops our vSphere host can accommodate.

Using the numbers from the previous example, we will calculate the maximum number of desktops the server could host, assuming the server has only the two 10 Gb connections for all network traffic, and no Fibre Channel storage network exists:

  • Network: Total server network bandwidth in MB / (Network Total MB per second of reference desktop + Disk Read MB per second + Disk Write MB per second) of the reference desktop:

    • 2,560 / (0.75 + 0.109 + 0.120) = 2,614 desktops

To determine the minimum specifications for the storage solution we will use to host our virtual desktops, we need to take the average number of Disk Reads and Writes per second from our Performance Monitor data and multiply that number by the number of desktops we wish to host. The following calculation shows an example of how we would calculate the required I/O per second, also known as IOPS, that our storage solution is required to service:

  • Data used for calculations:

    • Performance Monitor Disk Reads per second: 7.25

    • Performance Monitor Disk Writes per second: 10.09

    • Number of desktops to size the storage solution for: 500

    • Average IOPS for one 15K RPM SAS disk drive: 175 IOPS

  • (Disk Reads per second + Disk Writes per second) * Total number of desktops = Total IOPS required by the virtual desktop storage solution.

    • (7.25 + 10.09) * 500 = 8,670 IOPS

  • Our calculations tell us that our storage array will need to service at least 8,670 IOPS, which would require at least fifty 175 IOPS, 15K RPM SAS disk drives. Our calculations are based on the raw IOPS capabilities of the disk drives, and do not take into account the overhead required to implement a redundant array of inexpensive disks (RAID) using a large quantity of disks. The actual number of disks required to service 8,670 IOPS will be more than fifty; how much more is dependent on the architecture of our storage array.
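The IOPS sizing above can be sketched as follows (a minimal example; as noted, the resulting spindle count ignores RAID overhead):

```python
import math

# Aggregate IOPS for the 500-desktop example, and the raw spindle count needed.
reads_ps, writes_ps = 7.25, 10.09  # per-desktop averages from Performance Monitor
desktops = 500
drive_iops = 175                   # one 15K RPM SAS drive

total_iops = (reads_ps + writes_ps) * desktops
drives = math.ceil(total_iops / drive_iops)  # minimum spindles, before RAID overhead
print(f"{total_iops:.0f} IOPS, at least {drives} drives")
```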

The following table summarizes the number of desktops that the server could host, based on the current hardware configuration. The server referenced in this table has distinct networks for storage and virtual machine traffic.

Server Total Resources                  Desktop Average Utilization    Number of View Desktops Supported
16 processor cores                      13.80 percent (of one core)    115.94
144 GB memory                           2,443.4 MB                     48.28
20 Gb network bandwidth (2,560 MB)      0.75 MB                        3,413
8 Gb storage bandwidth (1,600 MB)       0.23 MB                        6,987

Based on the data that was gathered from the Performance Monitor session and the specifications of the servers, we can host a maximum of 48 desktops on these servers as they are currently configured. As the table indicates, our limiting resource is server memory. If the server supported it, and we wanted to maximize the number of virtual desktops the server could host, we could increase the amount of memory in the server and host up to 115 desktops, which is the maximum supported based on the processor configuration. To determine how much memory would be required, we would apply the calculation used earlier, in reverse:

  • Memory Committed MB per second of reference desktop * 1.25 * Number of desktops we want to host = Amount of memory required in MB:

    • 2,443.4 MB * 1.25 * 115 = 351,239 MB = 343 GB

In this example, if we were to increase the memory in our server to at least 343 GB, we could host 115 desktops.
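The reverse calculation is a one-liner; this sketch reproduces the figures above:

```python
# The memory formula run in reverse: memory required to host 115 desktops.
committed_mb = 2443.4   # average Memory Committed MB per desktop
desktops = 115
required_mb = committed_mb * 1.25 * desktops
print(f"{required_mb:.0f} MB, about {required_mb / 1024:.0f} GB")
```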

In both examples, the number of desktops that the servers could support based on network bandwidth may seem rather high. One reason is that we are basing our calculated maximum on the combined capacity of our two 10 Gb connections, which not every server may have. The second, and more common, reason is that our Performance Monitor data did not generate a significant amount of disk or network traffic. This is why it is critical that we gather Performance Monitor data from as many desktops as is feasible, across multiple job or user types within our organization.

The resources required by the virtual desktops are the most important part of determining vSphere capacity, but not the only factor we must take into consideration. The next section will discuss how virtual machine overhead and planning for vSphere failure or maintenance affects our sizing calculations.

Virtual Desktop overhead and vSphere reserve capacity

When determining the absolute maximum number of desktops we can host on a given vSphere host, we must take into account topics such as virtual machine overhead, accommodating vSphere host failures or maintenance, and other similar factors.

Calculating virtual machine overhead

vSphere requires a certain amount of memory and a small amount of CPU resources to manage the hosted virtual machines, including the ability to power them on. The amount of resources required for overhead is typically minimal compared to the resources required by the virtual machines themselves, but it is important not to determine capacity solely by using the calculations from the previous section.

The vSphere console can provide an estimate of the expected amount of memory overhead required for a given virtual machine. The following screenshot shows where the memory overhead is displayed in the Summary tab of the virtual machine properties:

The following table shows the expected amount of memory required to support a virtual machine of several different memory and processor configurations. This information is useful for determining how much memory should be available on a given host in order to properly manage the guest virtual machines.

Virtual Machine Memory    1 vCPU       2 vCPUs
1024 MB                   101.06 MB    123.54 MB
2048 MB                   121.21 MB    146.71 MB
3072 MB                   141.35 MB    169.88 MB
4096 MB                   161.50 MB    193.04 MB

The overhead associated with an individual virtual machine is subject to change during the operation of the virtual machine. There is no definitive way to calculate what this overhead will be, but these figures are considered a reasonable estimation.

Note

The figures listed in the previous table represent not only the per virtual machine memory overhead, but also the amount of memory that must be available in order to power on the virtual machine. If our vSphere host refuses to power on a virtual machine, lack of available memory is the likely cause.

The calculations we performed in the previous section revealed that we should assign 3 GB of memory to each of our sample Virtual Desktops. For the purpose of this calculation, we will assume that these Virtual Desktops require only one virtual CPU (vCPU), which is common for all but the heaviest of Power User desktops. Based on the vCPU and memory configuration, each Virtual Desktop would require an additional 141.35 MB of memory to be available for vSphere overhead, which would require another 16 GB be added to our previous server configuration, bringing the total to 359 GB of memory required to host 115 desktops.
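The overhead arithmetic above can be checked as follows (a minimal sketch using the chapter's figures):

```python
# Extra host memory needed for per-VM overhead: 115 single-vCPU, 3 GB desktops.
overhead_mb = 141.35  # per-VM overhead from the table above (1 vCPU, 3 GB)
desktops = 115
extra_mb = overhead_mb * desktops
print(f"{extra_mb:.0f} MB, about {extra_mb / 1024:.0f} GB")
```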

Note

These examples assume that we intend to operate our vSphere hosts at 100 percent capacity at all times. In many cases, including the one we will discuss next, this will not necessarily be the case. Just remember that operating any individual component of our View infrastructure at 100 percent capacity can lead to problems if our initial design does not take into account all possible contingencies, including spikes in usage, hardware failure, and other similar events.

The need for vSphere reserve capacity

A second reason not to fully populate a vSphere host is the need to accommodate all of the desktops in the event of a vSphere host failure or host maintenance operation. Consider a vSphere cluster with eight vSphere servers hosting 128 desktops each (1,024 total desktops):

  • Desktop requirements:

    Note

    Desktop requirements will vary from one environment to the next; these figures are just an example.

    • Each single vCPU desktop requires 10 percent of one vSphere server CPU core (average % Processor Time)

    • Each desktop requires 2,048 MB of memory (average Memory Committed Megabytes)

  • Eight vSphere hosts, each running 12.5 percent of the total number of Virtual Desktops:

    • 1024 desktops / 8 vSphere hosts = 128 desktops per host

  • To continue to run all the desktops in the event one vSphere host were to become unavailable, each of the remaining seven hosts would need to absorb an additional 18.29 desktops:

    • 128 desktops / 7 remaining vSphere hosts = 18.29 desktops per each vSphere host

  • To continue to run all desktops without any degradation in the quality of service, each server needs enough spare capacity to host 19 additional desktops. That reserve works out to:

    • 19 desktops * 10% of a CPU core = 1.9 available CPU cores required

    • 19 desktops * 2,048 MB of memory = 38,912 MB or 38 GB of available memory required

    • 19 desktops * 121.21 MB of memory for virtual machine overhead = 2303 MB or 2.3 GB of additional available memory required

    • 19 desktops * 0.75 MB network bandwidth = 14.25 MB of available network bandwidth required

    • 19 desktops * 0.23 MB storage network bandwidth = 4.37 MB of available storage network bandwidth required
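The reserve-capacity math above can be sketched as follows (a minimal example using this cluster's figures; substitute your own averages):

```python
import math

# N+1 spare capacity per surviving host for the 8-host, 1,024-desktop cluster.
desktops, hosts = 1024, 8
per_host = desktops / hosts            # 128 desktops on each host
displaced = per_host / (hosts - 1)     # 18.29 desktops to absorb per survivor
spare = math.ceil(displaced)           # plan for 19

cpu_cores = spare * 0.10               # 10 percent of one core per desktop
desktop_mem_mb = spare * 2048          # 2,048 MB of memory per desktop
overhead_mem_mb = spare * 121.21       # per-VM overhead (1 vCPU, 2 GB)
network_mb_s = spare * 0.75            # network bandwidth per desktop
storage_mb_s = spare * 0.23            # storage bandwidth per desktop
print(f"{cpu_cores:.1f} cores, {desktop_mem_mb} MB + {overhead_mem_mb:.0f} MB "
      f"overhead, {network_mb_s:.2f} MB network, {storage_mb_s:.2f} MB storage")
```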

These calculations assume that we want to protect the ability to provide resources for 100 percent of our desktops at all times, which is a very conservative yet valid approach to building a View infrastructure.

The final configuration of the vSphere servers should take into account not only what percentage of desktops are actually in use at a given time, but also the cost of purchasing the additional capacity needed to support vSphere host failures or other events that require downtime.

Note

Always take into consideration the growth of your View environment. Purchasing equipment that has limited ability to scale may save money today, but could cost you dearly when you need to expand. If a piece of equipment you plan to buy for your View infrastructure just barely meets your needs, look into the next larger model or even a competing product, if necessary.