ML helps computers work without being explicitly programmed, in a user-friendly way, so that anyone can learn it and apply it in daily life in fields such as health, research, science, finance, and intelligent systems.
Assigning a GPU in NVIDIA GRID vGPU
We have to configure a 3D farm like a normal farm in Horizon:
- Configure this pool in the same way as we usually configure a pool in Horizon, until we reach the Desktop Pool Settings section.
- Scroll to the Remote Display Protocol section in the Add Desktop Pool window.
- Choose one of the following options for the 3D Renderer setting:
    - Choose either Hardware or Automatic for vSGA
    - Choose Hardware for vDGA or MxGPU
- Set the default display protocol to PCoIP in the Desktop Pool settings, set Allow users to choose protocol to No in the drop-down, and set the 3D Renderer to NVIDIA GRID VGPU.
- To enable NVIDIA vGPU, enable vGPU support for the virtual machine:
- Click the New PCI Device bar, choose Shared PCI Device, and then click Add to continue:
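Once the Shared PCI Device is added, the vGPU assignment is recorded in the VM's .vmx file. The following is only a sketch of what those entries look like; the profile name grid_p40-4q is an example, and the exact entries depend on your card and GRID release:

```
pciPassthru0.present = "TRUE"
pciPassthru0.virtualDev = "vmiop"
pciPassthru0.vgpu = "grid_p40-4q"
```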
We can configure acceleration in three ways with VMware Horizon:
- Virtual shared graphics (vSGA)
- Virtual dedicated graphics (vDGA)
- Virtual shared passthrough graphics (vGPU)
vSGA uses a VMware-supplied driver that supports DirectX and OpenGL, while vDGA configurations use the native graphics card driver. SVGA, or VMware SVGA 3D, is the VMware Windows Display Driver Model-compliant driver included with VMware Tools on Windows virtual desktops. This 3D graphics driver can be installed on Windows for 2D/3D software rendering and is also used for vSGA.
VMware SVGA 3D can be configured for both 2D/3D software and vSGA deployments, and a virtual desktop can be switched rapidly between software and hardware acceleration without any change to the existing configuration. vSGA supports vMotion with a hardware-accelerated graphics configuration. This universal driver works across platforms without any further configuration:
With vSGA, the server's physical GPUs are virtualized and shared among the guest virtual machines residing on the same host server. We have to install a specific driver in the hypervisor, and all guest virtual machines then leverage the VMware vSGA 3D driver. vSGA has performance limitations with some applications because it offers only limited OpenGL and DirectX API support.
There are three 3D settings in vSphere and in the View pool settings: Automatic (the default), Software, and Hardware. We can enable or disable 3D, or set the 3D setting to Automatic, through vSphere. If we change the 3D configuration, the video memory reverts to its default value of 96 MB, so double-check the video memory before changing the configuration:
- Select Enable 3D Support.
- Set the 3D Renderer to Automatic or Hardware.
- Decide on the 3D video memory. By default, it is 96 MB, but it can be a minimum of 64 MB and a maximum of 512 MB:
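These pool settings map to per-VM configuration values. As a rough sketch of the corresponding .vmx entries (assuming hardware 3D support is enabled; svga.vramSize is specified in bytes, so the 96 MB default is 100663296):

```
mks.enable3d = "TRUE"
svga.vramSize = "100663296"
```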
Now we will set up the virtual machine settings for vGPU, as shown in the following screenshot:
The preceding screenshot shows the multiple configuration options that can be set as per the application's requirements, with all security measures in place.
- Select the virtual machine to be configured and click Edit Settings. First, add a Shared PCI Device, and then choose NVIDIA GRID vGPU to enable GPU passthrough on the virtual machine:
- Choose the required profile from the GPU Profile drop-down menu:
In a GPU profile string such as 4q, the number gives the size of the frame buffer (VRAM) in GB and the letter gives the required GRID license. A frame-buffer value of 0 denotes 512 MB, 1 denotes 1,024 MB, and so on. The GRID license types are as follows:
- b: GRID Virtual PC vGPUs, for business desktop computing
- a: GRID Virtual Application vGPUs, for remote desktop session hosts
- q: Quadro Virtual Data Center Workstation (vDWS), for workstation-specific graphics features and acceleration, such as up to four 4K monitors and certified drivers for professional applications:
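The naming scheme above can be decoded mechanically. The following minimal shell sketch (the helper name decode_vgpu_profile is ours, not an NVIDIA tool) splits a profile string such as grid_p40-4q into its frame-buffer size and license edition:

```shell
# Decode a vGPU profile string such as "grid_p40-4q":
# the digits before the final letter give the frame buffer in GB
# (0 denotes 512 MB), and the final letter gives the license edition.
decode_vgpu_profile() {
    profile="$1"
    suffix="${profile##*-}"     # e.g. "4q"
    fb="${suffix%[abq]}"        # strip the license letter -> "4"
    lic="${suffix#"$fb"}"       # the remainder is the letter -> "q"
    case "$lic" in
        b) edition="GRID Virtual PC" ;;
        a) edition="GRID Virtual Applications" ;;
        q) edition="Quadro vDWS" ;;
        *) edition="unknown" ;;
    esac
    echo "$profile: ${fb} GB frame buffer, license: $edition"
}

decode_vgpu_profile "grid_p40-4q"   # prints: grid_p40-4q: 4 GB frame buffer, license: Quadro vDWS
decode_vgpu_profile "grid_m10-1b"   # prints: grid_m10-1b: 1 GB frame buffer, license: GRID Virtual PC
```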
Click Reserve all memory when creating the virtual machine. We can manage end-to-end NVIDIA virtual GPU solutions, such as Quadro vDWS and NVIDIA GRID Virtual PC (vPC), with complete vGPU visibility into the entire infrastructure at the host, guest, or application level. This helps us become more responsive and agile, for a better end-user VDI experience.
We can deliver a far better user experience from high-end virtual workstations to enterprise virtual workspaces, which are cost effective to purchase, easy to deploy, and efficient to operate.
Users such as engineers, designers, content creators, and architects using the Pascal-based GPU with the Quadro vDWS software are able to get the best experience of running both accelerated graphics and compute (CUDA and OpenCL) workloads on any virtual workstation or laptop.
Knowledge workers use programs such as Windows 10, Office 365, and YouTube, which need graphics acceleration to achieve a better virtual desktop user experience; the NVIDIA Pascal-based GPU with NVIDIA GRID virtual PC provides this. NVIDIA NVENC delivers better performance and user density for Linux virtual workstation users by offloading H.264 video encoding, a heavy compute task, from the CPU to the GPU. Horizon provides customers with a single platform to publish all kinds of desktops (Windows and Linux) and applications, as per the user's graphics requirements.
NVIDIA GRID has software editions based on specific use cases:
- NVIDIA GRID Virtual Applications (vApp): We can use it for app virtualization or RDSH-based app publishing.
- vPC: This is suitable for a virtual desktop providing standard desktop applications, a browser, and multimedia.
- NVIDIA GRID Virtual Workstation (vWS): This is worthwhile for scientists and designers who work with powerful 3D-content creation applications such as CATIA, S, 3DExcite, Schlumberger Petrel, Autodesk Maya, and so on. Only vWS includes the NVIDIA Quadro driver.
NVIDIA GRID software editions can be purchased as an annual subscription or a perpetual license, in combination with support. A high-availability license server ensures that users can keep working uninterrupted even when the primary license server goes offline; a secondary license server then provides the license services to clients.
Maxwell-powered GPUs (NVIDIA® Tesla® M60, M6, and M10) remain supported in this Pascal-based launch. NVIDIA virtual GPU solutions are supported on all Pascal GPUs, with the Tesla P40 and the P6 (for blades) recommended, together with the appropriate software licenses.
Even if you have Maxwell-powered GPUs with an NVIDIA GRID solution, Pascal GPUs are required to benefit from the performance improvements, increased frame buffer, larger and more granular profile sizes, bigger system memory, the ability to run both virtualized graphics and compute workloads at scale on the same GPU, and the new task scheduler.
Features such as streamlined management and monitoring, which help with application-level monitoring and integrations, work on both Maxwell and Pascal cards with the NVIDIA GRID software release and GRID Management SDK 2.0. We have to choose the recommended Pascal/Maxwell boards for specific workloads.
We can recommend the P40 or M60 for commercial customers. The P40 provides the highest performance, larger memory, and easier management, and enables the virtualization of both graphics and compute (CUDA and OpenCL). The P40 is recommended when upgrading from the M60, the K2, or a Skylake-based server. The M60 will continue to be offered, and provides heterogeneous profiles and broader OEM server support.
The M10 is suggested for customers with density-driven deployments and for knowledge workers running everyday graphics-accelerated applications. For high-density blade-server deployments, the P6 is the recommended follow-on to the M6.
We can leverage Quadro/GRID capabilities and compare them with VMware virtual workstation/PC/virtual apps solutions. NVIDIA GRID vWS is now NVIDIA Quadro Virtual Data Center Workstation, or Quadro vDWS. The GRID brand will be used to describe a PC experience and will have two editions: NVIDIA GRID vPC and NVIDIA GRID vApps. While these two software editions were once called the NVIDIA GRID software platform, they will now be referred to as NVIDIA virtual GPU solutions.
MxGPU is a GPU virtualization technique with a built-in hardware engine responsible for VM scheduling and management. It leverages the underlying SR-IOV protocol as per the application's requirements. GPUs that are in passthrough mode can't be virtualized, so first run the script to disable passthrough mode. If MxGPU is enabled and vCenter is accessible, use the plugin to configure it instead of the script. vDGA can give a user unrestricted, dedicated access to a single GPU by providing direct passthrough to a physical GPU. The steps for installing the driver on a VM using an MxGPU device are the same as for a regular passthrough device under vDGA.
Configure the virtual machine while using MxGPU and vDGA:
For devices with a large BAR size, such as the Tesla P40, we have to set the following configuration parameters on the VM:

```
firmware="efi"
pciPassthru.use64bitMMIO="TRUE"
pciPassthru.64bitMMIOSizeGB="64"
```
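If you prefer to script this rather than use the Edit Settings UI, a sketch like the following appends the parameters to the VM's .vmx file. The path is an assumption for illustration; on a real host, point VMX at the VM's configuration file (the VM must be powered off), and back up the file first. Here it defaults to a temporary file so the sketch is runnable anywhere:

```shell
# Append large-BAR MMIO settings to a VM's .vmx file.
# On a real host, VMX would be something like
# /vmfs/volumes/<datastore>/<vm>/<vm>.vmx (hypothetical example path).
VMX="${VMX:-$(mktemp)}"

cat >> "$VMX" <<'EOF'
firmware="efi"
pciPassthru.use64bitMMIO="TRUE"
pciPassthru.64bitMMIOSizeGB="64"
EOF

# Show what was written, as a quick sanity check.
grep 'pciPassthru' "$VMX"
```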
- Add a PCI Device to the specific virtual machine and choose the required PCI Device to enable GPU passthrough:
- Log into the vSphere Web Client with the Administrator account, and on the Home page, click Radeon Pro Settings. Go to the Data Center view to manage all the MxGPU hosts in a specific data center.
- We can install the Radeon Pro Settings plugin for the vSphere Client to manage MxGPU:
VMware supports both AMD and NVIDIA graphics cards. We can download the appropriate VMware graphics driver from the vendor's website to use the graphics card or GPU hardware. We can add a PCI Device to a single virtual machine as well as to multiple virtual machines.
- To add a PCI Device to a number of virtual machines in one go with commands, do the following:
    - From an SSH session, browse to the AMD FirePro VIB driver directory and install the AMD VIB utility: cd /<path_to_vib
    - Edit vms.cfg: vi vms.cfg.
    - Press I and change the instances of .* to match the names of the VMs that require a GPU; for example, use .*MxGPU.* to match VM names that include MxGPU.
    - Save and quit by pressing Esc, typing :wq, and pressing Enter.
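The interactive vi edit can also be done non-interactively. A minimal sketch, assuming vms.cfg still contains the default catch-all .* pattern (here we create a sample file first so the sketch is runnable anywhere):

```shell
# Create a sample vms.cfg containing the default catch-all pattern.
printf '.*\n' > vms.cfg

# Replace the ".*" pattern with one that matches only VM names
# containing "MxGPU" (GNU sed; -i edits the file in place).
sed -i 's/^\.\*$/.*MxGPU.*/' vms.cfg

cat vms.cfg   # prints: .*MxGPU.*
```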
- Assign the virtual functions to the VMs:

```
sh mxgpu-install.sh -a assign
Eligible VMs:
WIN10-MxGPU-001
WIN10-MxGPU-002
WIN8.1-MxGPU-001
WIN8.1-MxGPU-002
These VMs will be assigned a VF, is it OK?[Y/N]y
```
We should then verify that all the VFs are populated in the device list. In this way, we can assign VFs automatically by using the script.
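One way to sanity-check this is to count the VF entries in the host's PCI device listing. The following sketch filters a captured listing; on a real host, you would pipe in the output of lspci (the device names below are illustrative):

```shell
# Count AMD virtual functions (VFs) in a PCI device listing read
# from stdin; on a real host: lspci | count_vfs
count_vfs() {
    grep -c -i 'FirePro.*VF'
}

# Sample captured listing (illustrative device names):
printf '%s\n' \
  '0000:04:00.0 Display controller: AMD FirePro S7150' \
  '0000:04:02.0 Display controller: AMD FirePro S7150 VF' \
  '0000:04:02.1 Display controller: AMD FirePro S7150 VF' \
  | count_vfs   # prints: 2
```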