As a distributed storage layer, VSAN depends heavily on robust and reliable networking. Accordingly, VSAN has specific networking requirements, along with several optional optimizations that can be put in place to improve its performance and reliability. This appendix discusses these considerations.
For host configurations with comparatively few NICs (such as hosts with the straightforward and common 2x10GbE configuration), the VSAN portgroup should be configured so that it can use both interfaces.
If bandwidth management is a consideration (for example, to balance VSAN network demands against management and VM workloads when everything shares two or more NICs), strongly consider enabling Network I/O Control on the vSphere Distributed Switch. Regardless of your vCenter license level, a VSAN license entitles you to use the Distributed Switch for exactly this reason.
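As a quick sanity check of the configuration described above, you can confirm from the ESXi shell which VMkernel interface is tagged for VSAN traffic and which uplinks back it. A hedged sketch (exact output fields vary by release, and a standard vSwitch is assumed; on a Distributed Switch, review the uplink and teaming settings in vCenter instead):

```shell
# List the VMkernel interfaces tagged for VSAN traffic on this host
esxcli vsan network list

# Show the vSwitch uplinks and teaming configuration backing the portgroup
esxcli network vswitch standard list
```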
On smaller clusters (five nodes or fewer), 1GbE networking can be adequate for production workloads. When using 1GbE networking, however, VM deployments and VSAN resync/rebuild activity will likely be network-constrained.
On larger clusters, and on extremely high-capacity clusters where significant resync-related traffic is expected during maintenance or failures, 10GbE or better should be considered mandatory; inter-node traffic grows considerably as the number of nodes scales up.
VSAN is fully supported in combination with the link-aggregation schemes supported by vSphere ESXi. VSAN's performance can be improved, particularly on larger clusters, by adding link aggregation.
Because data movement occurs over many unicast connections between hosts, the overall network load balances nicely as cluster utilization scales up when link-aggregation schemes are used. Examples include "Route Based on IP Hash" for static port channels, or LACP when that protocol is supported by the upstream switch(es) and you are using the vSphere Distributed Switch.
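If you opt for a static port channel on a standard vSwitch, the teaming policy can be switched to IP hash from the ESXi shell. A hedged sketch (the vSwitch name is an assumption, and the corresponding physical switch ports must already be configured as a static EtherChannel before this change, or connectivity will be disrupted):

```shell
# Set the teaming policy to "Route Based on IP Hash"
# (requires a matching static port channel on the upstream switch)
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash
```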
VSAN uses a combination of multicast and unicast traffic. Cluster-related tasks like directory services, quorum maintenance, status updates, inventory management, and so on, use multicast to minimize the amount of bandwidth consumed by these tasks. For this reason, all nodes in the VSAN cluster must be connected to a physical switch capable of processing multicast traffic. Many modern switches can automatically adjust to multicast demands and self-configure. Other switches may need to have IGMP snooping enabled. Some may need complete manual configuration of IGMP groups and queriers.
It is well worth taking the time to properly configure and validate the physical switches' multicast requirements before VSAN is enabled.
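One low-effort way to validate this from a host is to confirm which multicast groups VSAN is using and then watch for that traffic arriving on the VSAN VMkernel interface. A hedged sketch (vmk2 is an assumption, and the group addresses shown are the out-of-the-box VSAN defaults, which may have been changed in your environment; check the Master/Agent Group Multicast Address fields in the first command's output):

```shell
# Show the multicast groups and ports this host uses for VSAN
esxcli vsan network list

# Watch for VSAN multicast traffic arriving on the VSAN vmknic
tcpdump-uw -i vmk2 host 224.1.2.3 or host 224.2.3.4
```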
No VSAN production data is transmitted using multicast. When quorum is formed for an object and synchronous replication begins, the host-to-host communication for any given object, and the data movement required to service it, occurs over unicast. Many unicast connections are made between hosts to carry this object-level production traffic.
Jumbo frames can provide a modest performance boost to VSAN deployments by reducing per-packet and per-frame overhead. It is important to note, however, that not all network interfaces and drivers support jumbo frames for multicast traffic, even if they do for unicast.
If you plan to use jumbo frames in your VSAN infrastructure, ensure that the configuration is absolutely consistent across all nodes and physical switches. It is also strongly recommended that you perform extensive validation and testing before rolling jumbo frames out to production.
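The end-to-end consistency point can be checked directly from the ESXi shell. A hedged sketch (vSwitch0, vmk2, and the peer host's address are assumptions): raise the MTU on the virtual switch and the VSAN VMkernel interface, then prove that a full-size frame crosses the physical network without fragmenting:

```shell
# Raise the MTU on the standard vSwitch and on the VSAN VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Validate end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header),
# and -d sets "don't fragment" so an undersized hop fails loudly
vmkping -I vmk2 -d -s 8972 192.168.10.12
```

Run the `vmkping` test from every host to every other host's VSAN address; a single switch port left at the default MTU is enough to cause intermittent failures.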