VSAN provides multiple storage-policy options to help you define the best operating parameters for your VMs. They are described in detail here.
This policy option defines how many node failures your object should survive (or how many fault domains can be lost in vSphere 6.0). Node-failure tolerance is achieved by building mirrors of your objects. Those mirrors are distributed throughout the cluster in such a way that the specified number of hosts can fail. To accomplish this, VSAN will create n + 1 copies of the data, where n is the number of failures to tolerate.
As quorum-based availability for an object requires that >50 percent of all data components and witnesses be available (see Appendix B, Additional VSAN Information), specifying more failures to tolerate requires a larger number of hosts. While VSAN will create n + 1 copies of the data, it requires 2n + 1 nodes with storage capacity to be available to ensure that the >50 percent rule is not violated. Specifying two failures to tolerate, for example, requires 2(2) + 1 = 5 nodes in the VSAN cluster.
Regardless of the number of hosts in the cluster, the number of failures to tolerate is capped at three.
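The copy and host arithmetic above can be sketched in a short helper. This is illustrative Python, not a VMware API; the function name is hypothetical.

```python
def vsan_ftt_requirements(failures_to_tolerate):
    """Hypothetical helper (not a VMware API): for n failures to
    tolerate, VSAN creates n + 1 mirror copies of the data and needs
    2n + 1 hosts with storage capacity so that >50 percent of the
    components and witnesses remain available after any tolerated
    failure."""
    n = failures_to_tolerate
    if not 0 <= n <= 3:
        raise ValueError("failures to tolerate must be between 0 and 3")
    return n + 1, 2 * n + 1

# Two failures to tolerate: 3 copies of the data, 5 hosts required.
copies, hosts = vsan_ftt_requirements(2)
```

For one failure to tolerate, this gives two copies across a minimum of three hosts, matching the quorum rule described above.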
This policy option defines how many physical hard disk drives (spindles) should be used for each mirror copy of the data. This is analogous to traditional RAID-0. Striping across multiple spindles can help improve performance at the cost of additional complexity within an object.
It is important to note, however, that the number of stripes we can define in any given infrastructure may be lower than the 12-stripe absolute maximum. Because we are specifying the number of physical drives that should be consumed, we cannot specify more stripes than we have physical disks. If we have only four disks per host in the capacity tier, for example, and we want to tolerate a node failure in a three-node cluster, the maximum number of stripes we can define is four.
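That cap can be expressed as a one-line sketch: the achievable stripe width is the lesser of the 12-stripe policy maximum and the physical capacity disks available to a mirror (illustrative Python; the names are hypothetical):

```python
VSAN_MAX_STRIPE_WIDTH = 12  # absolute per-object stripe maximum

def effective_max_stripes(capacity_disks_available):
    """Hypothetical helper: stripes consume physical capacity disks,
    so you can never define more stripes than the disks on which a
    mirror's components can actually be placed."""
    return min(VSAN_MAX_STRIPE_WIDTH, capacity_disks_available)

# Four capacity disks per host, as in the three-node example above:
effective_max_stripes(4)   # -> 4
effective_max_stripes(20)  # -> 12
```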
This policy option lets you specify whether you want the resulting VSAN object to be thin-provisioned, thick-provisioned, or somewhere in between. As opposed to "thin" or "thick" provisioning being a binary choice on traditional storage platforms, within VSAN there is a sliding scale for thick provisioning. You can specify any whole-percentage value for this policy option, between 0 percent (completely thin) and 100 percent (completely thick).
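The sliding scale translates directly into up-front reserved capacity. A minimal sketch (hypothetical function, not a VMware API):

```python
def reserved_capacity_gb(object_size_gb, space_reservation_pct):
    """Hypothetical helper: 0 percent is fully thin, 100 percent is
    fully thick, and any whole percentage in between reserves that
    fraction of the object's size at creation time."""
    if space_reservation_pct not in range(0, 101):
        raise ValueError("reservation must be a whole percentage, 0-100")
    return object_size_gb * space_reservation_pct / 100

# A 40GB object with a 25 percent reservation claims 10GB up front.
reserved_capacity_gb(40, 25)  # -> 10.0
```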
This policy option lets you specify whether or not VSAN is permitted to violate the specified policy in order to create the object. By default, this option is set to no for most object types. Force provisioning can be useful if, for example, you ordinarily thick-provision your VMs but you are approaching a capacity limitation in the cluster pending the installation of additional disks/disk groups or hosts.
This policy option lets you reserve SSD read cache for the object, as a percentage of the object's size. This can be useful for VMs or objects facing significant read-performance constraints, but misuse of this policy option can crowd out other IO and cause serious negative consequences in the VSAN cluster.
The only technical limitation on this reservation is the size of the SSD read cache (70 percent of the size of the SSD).
While the preceding restriction is the only technical limitation, it is important to understand how the use of this policy option can affect the VSAN cluster. As the cache reservation is a function of the object's size rather than the cache's size, it is very easy to accidentally overprovision cache reservations and starve other objects (or even the object itself, in extreme cases) of cache resources.
If you have a 100GB SSD drive, for example, 70GB of it will be used for read cache. If you specify a storage policy with a 10 percent cache reservation and apply it to two 500GB virtual disk objects, you have reserved 2 × 50GB = 100GB of cache against 70GB available on any given node; the read cache is immediately overprovisioned, and cluster performance will severely suffer.
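The overcommit arithmetic in that example can be checked with a short sketch (illustrative Python; names are hypothetical):

```python
def read_cache_overcommit(ssd_size_gb, reservations):
    """Hypothetical helper: 70 percent of the SSD serves as read
    cache, while each reservation is a percentage of the *object's*
    size rather than the cache's, so large objects overcommit the
    cache very quickly. `reservations` is a list of
    (object_size_gb, reservation_pct) pairs."""
    cache_gb = 0.70 * ssd_size_gb
    reserved_gb = sum(size * pct / 100 for size, pct in reservations)
    return reserved_gb, cache_gb, reserved_gb > cache_gb

# Two 500GB objects at 10 percent each against a 100GB SSD:
read_cache_overcommit(100, [(500, 10), (500, 10)])
# -> (100.0, 70.0, True): 100GB reserved against 70GB of cache.
```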
This policy option should be used sparingly, and only to address a performance problem with specific VM disks; it should not be applied by default.