
Creating datastores on an ESXi host


On the networking side, VMware has done a lot of work to ease administration with the VMware Distributed Virtual Switch. For storage, Datastore Clusters (introduced in vSphere 5.0) alleviate some of the day-to-day management of datastores. From a provisioning standpoint, however, the initial setup of storage is still manual and can involve many repetitive steps, so scripting it makes a lot of sense in large environments.

Storage under vSphere also behaves differently from networking, since some operations must be performed on the raw storage device and are not repeated on every host. There are three types of storage connectivity that you might need to provision: NFS, iSCSI, and Fibre Channel. This example focuses on iSCSI and NFS, and you will work through provisioning storage with both. Along the way, Fibre Channel is also discussed, since its concepts overlap with iSCSI from a vSphere perspective.

Getting ready

For this example, you will need to open a PowerCLI window and connect to an ESXi host. You will also want to make sure that you have the VMHost object stored in a variable named $esxihost, as covered in the Getting the VMware host object section.

How to do it…

  1. The simplest of all datastores to provision is an NFS datastore; a single PowerCLI cmdlet, New-Datastore, takes all of the input needed to provision the new datastore and make it available for use. Since NFS does not use the VMFS filesystem, there are no filesystem properties to pass. To connect NFS, you just need to provide a name for vSphere to identify the datastore, a path (the export), and the host that serves the NFS export, as follows:

    $esxihost | New-Datastore -Nfs -Name DataStoreName -Path /data1/export -NfsHost nfsserver.domain.local
    
  2. With this, you've got your first datastore presented and ready to host virtual machines. For NFS, this is all that is required.
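
    To confirm the mount succeeded, you can query the host for the new datastore; this quick check assumes the datastore name used in the example above:

    Get-Datastore -VMHost $esxihost -Name DataStoreName | Select-Object Name, CapacityGB, FreeSpaceGB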

  3. iSCSI and Fibre Channel storage are a bit more complex to provision from a PowerCLI and vSphere perspective. Provisioning storage on either of these protocols requires additional decisions to be made when creating the datastore, and iSCSI also requires configuration steps that are not needed with Fibre Channel. We will focus on iSCSI in this example and note where the concepts overlap with Fibre Channel.

  4. iSCSI is an IP-based storage protocol, and as such, you will need to do a bit of network configuration to set up iSCSI in your environment. The first thing to do is to check the host's storage configuration and see whether the software iSCSI adapter (the initiator) is enabled, as follows:

    $esxihost | Get-VMHostStorage
    
  5. By default, the software iSCSI adapter is not created. To create and enable it, you expand on the previous cmdlet and set the SoftwareIScsiEnabled value to true, as follows:

    $esxihost | Get-VMHostStorage | Set-VMHostStorage -SoftwareIScsiEnabled $true
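
    To verify the change, read the property back; a value of True confirms that the software iSCSI adapter is now enabled:

    ($esxihost | Get-VMHostStorage).SoftwareIScsiEnabled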
    
  6. The next step is to add the iSCSI target using the New-IScsiHbaTarget cmdlet. This cmdlet requires that you pass in the iSCSI HBA as an object, so you first retrieve the iSCSI HBA using Get-VMHostHba, store it in a variable, and then use it with New-IScsiHbaTarget. In the following example, $target holds the address of your iSCSI array:

    $iSCSIhba = $esxihost | Get-VMHostHba -Type iScsi
    
    $target = "192.168.1.100" # example value: the address of your iSCSI array
    New-IScsiHbaTarget -IScsiHba $iSCSIhba -Address $target -ChapType Required -ChapName vSphere -ChapPassword Password1
    

    Note

    In the example, there are additional parameters for authentication. iSCSI uses the Challenge-Handshake Authentication Protocol (CHAP) to authenticate sessions to the target storage. Authentication is not required; if the storage system is not configured for authentication, these parameters can be omitted. However, it is bad practice to deploy a production storage array without authentication.
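
    To confirm that the target registered, you can list the targets bound to the HBA; this check reuses the $iSCSIhba variable from the previous step:

    Get-IScsiHbaTarget -IScsiHba $iSCSIhba | Select-Object Address, Port, Type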

  7. The final step of the initial iSCSI configuration is to bind the iSCSI HBA to a specific vmkernel port. Since you created a Storage Network vmkernel port, this is the port that you want to use. To make this change, and to remove any other ports, you have to use the ESXCLI interface within PowerCLI; there isn't a native PowerCLI cmdlet for this function:

    $esxcli = Get-ESXCLI -VMHost $esxihost
    $esxcli.iscsi.networkportal.add($iscsihba, $true,"vmk2")
    
  8. In this case, the vmkernel port assigned to the Storage Network port group is vmk2. Using the ESXCLI interface, you can bind it to the iSCSI HBA. To confirm the change, you can use the list() method, as follows:

    $esxcli.iscsi.networkportal.list()
    
  9. As you will see, there may be other vmkernel ports listed (in this case, vmk0). You can remove them with the remove() method, as follows:

    $esxcli.iscsi.networkportal.remove($iscsihba,$true,"vmk0")
    

    Now that the system has its targets configured, any storage that the iSCSI array has provisioned to the host should be visible. This is the point where iSCSI and Fibre Channel converge: since iSCSI uses the host bus adapter model pioneered by Fibre Channel, the two work in the same way after the initial configuration. You must mount NFS datastores on each host, and you must perform the initial iSCSI configuration on each host; however, scanning and formatting VMFS datastores only needs to be done from a single host for iSCSI and Fibre Channel disks, since they are shared resources. This means that when scripting these steps, the next few steps only need to run on a single host in the cluster, after which every host needs to perform a rescan:

    $esxihost | Get-VMHostStorage -RescanAllHBA -RescanVmfs
    

    Starting with a rescan is a good idea so that your system recognizes all of the storage changes and sees all of the disks that have been presented. Whether you're using software or hardware iSCSI, Fibre Channel, or converged network adapters, this is the point where your hosts see their SAN disks.

  10. At this point, your ESXi system doesn't yet have iSCSI or Fibre Channel datastores that it can use. Even though the disk is visible, it is unformatted and not ready to host VMs. To discover the disks and enumerate the data needed to configure them, use the Get-ScsiLun cmdlet:

    $esxihost | Get-ScsiLun
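
    Get-ScsiLun can return a long list, so selecting a few identifying properties makes the disks easier to compare; note that CapacityGB assumes a recent PowerCLI release (older versions expose CapacityMB instead):

    $esxihost | Get-ScsiLun | Select-Object CanonicalName, RuntimeName, Vendor, Model, CapacityGB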
    
  11. This returns a list of disks available to the SCSI subsystem under ESXi, and the list might contain many objects. You can use various properties of the returned ScsiLun objects to identify disks and drive provisioning. For instance, you can scope the list using the Vendor property or the Model property. For the purpose of this example, we will assume that you have a disk identified by the iSCSIDisk model and use that for scoping. To create a new datastore on the disk, you need its canonical name, which is also a property of the ScsiLun object:

    $LUN = $esxihost | Get-ScsiLun | Where {$_.Model -like "iSCSIDisk"} 
    
  12. In situations where many disks are presented to a host, identification by model might not be the best approach. Another method is to use the RuntimeName property, which enumerates the HBA, controller, target, and LUN number. For instance, if you know that the LUN you want to prepare is LUN 8, which is represented in the RuntimeName as L8, the PowerCLI to scope and return it would be as follows:

    $LUN = $esxihost | Get-ScsiLun | Where {$_.RuntimeName -like "*L8"} 
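
    Whichever scoping method you use, take a quick look at what the variable holds; a count of 1 means that exactly one LUN matched:

    $LUN | Select-Object CanonicalName, RuntimeName
    ($LUN | Measure-Object).Count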
    
  13. Having confirmed that $LUN contains exactly the object you expect, pass it into the New-Datastore cmdlet:

    $esxihost | New-Datastore -Name iSCSIDatastore1 -Path $LUN.CanonicalName -VMFS
    
  14. This provisions the disk as a VMFS datastore and allows it to be used for VM storage. At this point, you can initiate a rescan on all of the ESXi hosts in the cluster, and they will all see the same shared storage.
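
    The cluster-wide rescan can be scripted as a single pipeline; this sketch assumes a cluster named Cluster01, so substitute your own cluster name:

    Get-Cluster -Name Cluster01 | Get-VMHost | Get-VMHostStorage -RescanAllHBA -RescanVmfs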

How it works…

Provisioning datastores in vSphere works differently for each type of SAN storage. NFS is simpler than iSCSI or Fibre Channel and just requires that you connect (or mount) the datastore for use on the host. Software-based iSCSI requires some additional configuration on the host so that it can connect to the target array, but after that, iSCSI and Fibre Channel work in the same way, with backend storage LUNs presented to the host for consumption.

See also

  • The Creating and managing datastore clusters and the Performing Storage vMotion recipes in Chapter 4, Working with Datastores and Datastore Clusters