VMware NSX Cookbook

By: Bayu Wibowo, Tony Sangha

Overview of this book

This book begins with a brief introduction to VMware's NSX for vSphere Network Virtualization solutions and how to deploy and configure NSX components and features such as Logical Switching, Logical Routing, layer 2 bridging and the Edge Services Gateway. Moving on to security, the book shows you how to enable micro-segmentation through NSX Distributed Firewall and Identity Firewall and how to do service insertion via network and guest introspection. After covering all the feature configurations for single-site deployment, the focus then shifts to multi-site setups using Cross-vCenter NSX. Next, the book covers management, backing up and restoring, upgrading, and monitoring using built-in NSX features such as Flow Monitoring, Traceflow, Application Rule Manager, and Endpoint Monitoring. Towards the end, you will explore how to leverage VMware NSX REST API using various tools from Python to VMware vRealize Orchestrator.
Table of Contents (19 chapters)

Deploying the NSX Controller Cluster


The NSX controller cluster is an integral part of any NSX for vSphere deployment; it is responsible for:

  • Managing the vSphere hypervisor routing and switching modules
  • Managing the ARP table, MAC table, and VXLAN network identifier (VNI) information for the entire NSX for vSphere deployment
  • Managing Distributed Logical Router state, including:
    • Interfaces
    • Layer 2 Bridging Tables
    • Routes

Note

The NSX Controller Cluster is the control plane for all networking constructs in an NSX deployment; the Distributed Firewall control plane, however, is managed by the NSX Manager itself.

Getting ready

The following are things to consider before deploying the NSX controller cluster:

  • The controller cluster must be deployed with exactly three controller nodes.
  • Each controller node should reside on a separate ESXi host; DRS anti-affinity rules should be used to enforce this rule. It is generally recommended to deploy controllers on a vSphere cluster with a minimum of four ESXi hosts.
  • Ensure sufficient resources (vCPU, memory, and storage) are available on the vSphere cluster.
  • NSX controller nodes should be deployed onto shared storage that is highly available.
  • Each NSX controller requires an IPv4 address; these addresses are allocated via the NSX IP pool construct.
  • NSX controllers require connectivity to NSX Manager and vSphere management VMKernel IP addresses.
  • NSX controllers should reside on a VLAN-backed PortGroup.

The NSX Controller IP Pool requires the following details prior to configuration. You can change values to suit your environment:

Component         Value
Name              IP-Pool-NSX-Controllers
Gateway           192.168.1.254
Prefix Length     24
Primary DNS       192.168.1.110
Secondary DNS
DNS Suffix        corp.local
Static IP Pool    192.168.1.31 - 192.168.1.33

How to do it...

In the following sub-sections, we deploy the NSX controller cluster, which is required by the logical networking components in NSX.

Configuring an NSX IP pool

Before deploying the NSX controller cluster, an IP pool must be configured to reserve three IPv4 addresses on the network:

  1. In the vCenter Web Client, navigate to Networking & Security | NSX Managers | NSX Manager
  2. Select IP Pools and click on the plus sign
  3. Fill in the details as per the preceding table and click on OK

If the IP pool is not configured beforehand, it can also be created during deployment of the NSX controller cluster.
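
If you prefer to automate this step, the same pool can be created through the NSX REST API rather than the Web Client. The sketch below only builds the XML request body; the endpoint (POST /api/2.0/services/ipam/pools/scope/globalroot-0) and element names follow the NSX for vSphere API, but verify them against the API guide for your NSX version, and substitute your own NSX Manager address and credentials when sending the request.

```python
# Sketch: building the IP pool creation payload for the NSX for vSphere
# REST API. Element names are taken from the NSX-v API guide; confirm
# them for your release before use.
import xml.etree.ElementTree as ET

def build_ip_pool_payload(name, gateway, prefix_length, primary_dns,
                          dns_suffix, start_ip, end_ip):
    """Return the XML body describing an NSX IP pool."""
    pool = ET.Element("ipamAddressPool")
    ET.SubElement(pool, "name").text = name
    ET.SubElement(pool, "prefixLength").text = str(prefix_length)
    ET.SubElement(pool, "gateway").text = gateway
    ET.SubElement(pool, "dnsSuffix").text = dns_suffix
    ET.SubElement(pool, "dnsServer1").text = primary_dns
    ranges = ET.SubElement(pool, "ipRanges")
    rng = ET.SubElement(ranges, "ipRangeDto")
    ET.SubElement(rng, "startAddress").text = start_ip
    ET.SubElement(rng, "endAddress").text = end_ip
    return ET.tostring(pool, encoding="unicode")

payload = build_ip_pool_payload(
    "IP-Pool-NSX-Controllers", "192.168.1.254", 24,
    "192.168.1.110", "corp.local", "192.168.1.31", "192.168.1.33")
# The payload would then be POSTed to
#   https://<nsx-manager>/api/2.0/services/ipam/pools/scope/globalroot-0
# with basic authentication and Content-Type: application/xml.
print(payload)
```

The values passed in match the IP pool table in the Getting ready section, so the result mirrors what the Web Client procedure produces.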

NSX Controller Cluster deployment

In this section, we will deploy each of the three NSX Controllers on our vSphere cluster:

  1. In the vCenter Web Client, navigate to Networking & Security | Installation
  2. Under the NSX controller nodes menu pane, click on the plus sign
  3. Fill in the NSX controller details for the first node as follows and then click on OK:

Name                     Value
Name                     NSX_Controller_1
Datacenter               RegionA01
Cluster/Resource Pool    RegionA01-MGMT01
Datastore                RegionA01-iSCSI-MGMT
Host                     Optional
Folder                   Optional
Connected To             VM-RegionA01-vDS-MGMT
Password                 VMware1!VMware1!

  4. After the first controller is deployed, repeat steps 1 to 3 for the remaining two controllers

Once all the controllers have been deployed, the Installation tab in Networking & Security lists each node, with green boxes indicating healthy connectivity between each of the peers in the controller cluster.
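
Controller status can also be checked programmatically: the NSX for vSphere API returns the controller inventory from GET /api/2.0/vdn/controller on the NSX Manager. The sketch below parses an illustrative, heavily trimmed response (the real payload carries many more fields per node) to flag any controller that is not in the RUNNING state.

```python
# Sketch: flagging unhealthy controllers from the inventory returned by
# GET /api/2.0/vdn/controller. SAMPLE_RESPONSE is illustrative only,
# reduced to the two fields inspected here.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """\
<controllers>
  <controller><id>controller-1</id><status>RUNNING</status></controller>
  <controller><id>controller-2</id><status>RUNNING</status></controller>
  <controller><id>controller-3</id><status>RUNNING</status></controller>
</controllers>"""

def unhealthy_controllers(xml_text):
    """Return the IDs of controllers whose status is not RUNNING."""
    root = ET.fromstring(xml_text)
    return [c.findtext("id") for c in root.findall("controller")
            if c.findtext("status") != "RUNNING"]

print(unhealthy_controllers(SAMPLE_RESPONSE))  # expect []
```

An empty list corresponds to the all-green state shown in the Web Client; any ID that appears in the list warrants investigation before proceeding.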

DRS Anti-Affinity Rules

DRS anti-affinity rules are required to ensure that the NSX controllers do not reside on the same ESXi host. This ensures that if an ESXi host fails, all three controllers cannot be lost at once, taking the entire control plane for logical networking with them. If two controllers are lost, the remaining controller goes into read-only mode until a cluster majority is restored.

It's important to note that the underlying infrastructure should still be designed for HA and resiliency, which includes compute/network/storage.
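
The read-only behaviour described above follows from simple majority arithmetic, which can be sketched as:

```python
# Sketch: why a three-node controller cluster tolerates exactly one
# failure. The cluster stays writable only while a strict majority
# (quorum) of its nodes is alive.
def quorum(cluster_size):
    """Minimum number of live nodes needed for a writable majority."""
    return cluster_size // 2 + 1

def is_writable(cluster_size, live_nodes):
    return live_nodes >= quorum(cluster_size)

print(quorum(3))          # 2: two of three controllers must survive
print(is_writable(3, 2))  # True: one controller lost, still writable
print(is_writable(3, 1))  # False: lone survivor drops to read-only
```

This is exactly why the anti-affinity rules matter: spreading the three controllers across hosts means a single host failure removes at most one node, which the quorum can absorb.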

Configuring DRS anti-affinity rules via the vSphere web client:

  1. In the vCenter Web Client, navigate to Hosts and Clusters | Management Cluster | Manage | Settings | VM/Host Rules
  2. Click on Add...
  3. Choose Type as Separate Virtual Machines
  4. Add the NSX controller virtual machines and click on OK

Configuring DRS Anti-Affinity Rules via PowerCLI

You can also configure DRS anti-affinity rules using PowerCLI. To do so, ensure PowerCLI is installed locally on your system, then perform the following steps:

  1. Open the PowerCLI terminal.
  2. Type Connect-VIServer -Server VCENTER_SERVER, which will connect your PowerCLI session to the vCenter server you are working on.
  3. Next, retrieve the NSX controller virtual machines and store them in a variable, $nsx_controllers, using the Get-VM PowerCLI cmdlet. The following code snippet demonstrates the command:
$nsx_controllers = get-vm | ? {$_.name -like "NSX_Controller*"}
  4. Next, using the New-DrsRule cmdlet, configure the anti-affinity DRS rule on the RegionA01-MGMT01 vSphere cluster, passing the controller VMs stored in $nsx_controllers:
New-DrsRule -Name nsx-controller-anti-affinity -Cluster RegionA01-MGMT01 -KeepTogether $false -VM $nsx_controllers

There's more...

In the following sub-sections, placement of the NSX Controllers and Controller password configuration will be discussed in greater detail.

Separate vCenter environment

The controller cluster is deployed in a group of three. Each controller node can only be deployed onto a vSphere cluster that is part of the vCenter inventory paired with the NSX Manager you are configuring. In large environments with multiple vCenters, it is not uncommon for the vCenter server and NSX Manager to be deployed onto a dedicated vSphere cluster managed by an independent vCenter server designated for management. In this scenario, the NSX controller cluster cannot be deployed onto that dedicated management vSphere cluster.

Controller password parameters

The NSX controller password must meet the following criteria:

  • It must not contain the username as a substring
  • A character must not be repeated consecutively more than three times
  • It must be at least 12 characters long and must follow three of the following four rules:
    • It must have at least one uppercase letter
    • It must have at least one lowercase letter
    • It must have at least one number
    • It must have at least one special character
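
As a quick sanity check before setting a password, the rules above can be encoded directly. The validator below is purely illustrative, not VMware's implementation, and the username parameter defaulting to admin is an assumption for the example.

```python
# Sketch: checking a candidate controller password against the stated
# rules: length >= 12, no username substring, no character repeated
# more than three times in a row, and at least three of the four
# character classes present.
import re

def valid_controller_password(password, username="admin"):
    if len(password) < 12:
        return False
    if username and username.lower() in password.lower():
        return False
    if re.search(r"(.)\1{3,}", password):  # same char 4+ times in a row
        return False
    classes = [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) >= 3

print(valid_controller_password("VMware1!VMware1!"))  # True
print(valid_controller_password("short1!"))           # False
```

Note that the sample password used earlier in this recipe, VMware1!VMware1!, satisfies all four character classes and the length requirement, which is why it passes.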

In the event that the NSX controller password is forgotten, it can be easily changed using the following steps:

  1. Log into the vSphere Web Client
  2. Click on the Networking & Security tab and then navigate to Installation | Management:
    1. Under the NSX Controller nodes menu, select Actions
    2. Click on Change Controller Cluster Password
    3. Type a new password following the preceding guidelines and click on OK