In this recipe, we are going to see what InfiniBand is and how it can be configured to create a high-bandwidth network for a Proxmox cluster.
With very low latency, InfiniBand (IB) competes with Gigabit, 10 GbE, and 100 GbE Ethernet. Originating in 1999, InfiniBand continues to provide the means to create high-performing clusters, beating Ethernet on both latency and price. IB is an excellent choice for connecting a storage cluster to a compute cluster in a virtual environment. Linux provides full support for IB, although some kernel modules must be loaded before it can be used. Once configured, an IB interface behaves much like any other network interface. IB configuration must be done through the CLI; it cannot be edited or configured through the Proxmox GUI.
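As a rough sketch of what this looks like in practice, the following commands load commonly used IB kernel modules and add an IPoIB interface stanza to /etc/network/interfaces. The module names shown (mlx4_ib for Mellanox ConnectX-family adapters, ib_ipoib for IP-over-InfiniBand) and the 192.168.0.1 addressing are illustrative assumptions; substitute the modules and addresses that match your hardware and network:

```shell
# Load InfiniBand kernel modules (mlx4_ib is for Mellanox
# ConnectX-family HCAs; use the driver that matches your adapter).
modprobe mlx4_ib
modprobe ib_ipoib      # IP-over-InfiniBand; provides the ib0 interface
modprobe ib_umad       # userspace management access, used by IB diagnostic tools

# Append an example stanza for ib0 to /etc/network/interfaces
# (illustrative addressing -- adjust for your environment):
cat >> /etc/network/interfaces <<'EOF'
auto ib0
iface ib0 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    pre-up modprobe ib_ipoib
    # Connected mode allows a large MTU; datagram mode is limited to 2044.
    pre-up echo connected > /sys/class/net/ib0/mode
    mtu 65520
EOF
```

After saving the file, bringing the interface up with `ifup ib0` (or rebooting the node) should make ib0 usable like any other network interface.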
Note
For more information on InfiniBand, visit http://en.wikipedia.org/wiki/InfiniBand.
The following image is an example of an InfiniBand QDR adapter from Mellanox Technologies Inc.:
There are different types of IB cards based on different speeds and...