Prepare Host Clusters for Network Virtualization

By | August 22, 2018

In this post, I will cover the following topics of Objective 1.2 of the VCAP6-NV Deploy exam:

Objective 1.2 – Prepare Host Clusters for Network Virtualization

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
    • Add/Remove Hosts from Cluster
  • Configure the appropriate teaming policy for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

Prepare vSphere Distributed Switching for NSX

Firstly, make sure that the minimum MTU for NSX is 1600 bytes. The 1600-byte requirement exists because the original Ethernet frame is encapsulated with additional VXLAN, UDP and IP headers, which increases its size; the result is called a VXLAN encapsulated frame.
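The back-of-the-envelope arithmetic behind that 1600-byte figure can be sketched as follows (a rough sketch for the common IPv4, no-outer-VLAN case; exact overhead varies with IPv6 or outer VLAN tagging):

```python
# Why a 1500-byte frame no longer fits in a 1500-byte MTU once VXLAN-encapsulated.
inner_frame = 1500 + 14   # original payload plus the inner Ethernet header
vxlan_header = 8          # VXLAN header carrying the 24-bit VNI
outer_udp = 8             # outer UDP header
outer_ipv4 = 20           # outer IPv4 header

encapsulated = inner_frame + vxlan_header + outer_udp + outer_ipv4
print(encapsulated)             # 1550 bytes of encapsulated frame
print(encapsulated <= 1600)     # True: the 1600-byte minimum leaves headroom
```

So a standard frame grows by 50 bytes before the outer Ethernet header is even counted, which is why 1600 is the minimum rather than an exact fit.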

A fully configured vDS is also required, so make sure you have migrated port groups, uplinks, and so on from the vSS to the vDS, because NSX works only with distributed switches, not standard switches.

To verify/configure the MTU on the vDS, select the vDS from the list, navigate to Manage > Settings > Properties, click the Edit button, and under the 'Advanced' tab change the MTU to 1600 and click OK.

Prepare a cluster for NSX

The hosts have to be prepared for NSX. This installs the NSX kernel modules (VIB files). These VIBs include the NSX kernel modules, userspace agents, configuration files, and install scripts. Once installed, they enable Layer 2 VXLAN functionality, distributed routing, and the distributed firewall, and establish the control and management plane connectivity for the hosts.

Before doing host preparation, ensure following prerequisites have been met:

  • Make sure the NSX Manager and Controllers are deployed
  • The cluster hosts are joined to the same vDS
  • DNS forward/reverse records are in place
  • vCenter is functioning
  • Disable VUM before deploying the Controllers

If you have a dedicated Management Cluster, you probably would not install NSX on those hosts.

Log into the vSphere Web Client.

Note: Make sure that there are no issues with the cluster prior to the host preparation task. If an issue is detected, select your cluster and click the Actions icon; if 'Resolve' appears, it means a host needs a reboot. If you cannot clear 'Resolve', you will need to move the hosts to a new cluster and delete the old one.

To prepare ESXi hosts for NSX installation, click Networking & Security, navigate to Installation > Host Preparation, select the appropriate cluster, and click the gear button to start the installation of the NSX VIBs.

NSX will start pushing the required VIBs to the hosts.

Once the VIBs are successfully installed, make sure each host and the overall cluster show a green tick and the status 'Enabled'.

Add Hosts to Cluster

When you add a new host to a cluster that is prepared for NSX, the VIBs are automatically pushed to the new host. The best way to add ESXi hosts to an existing NSX-prepared cluster is:

  1. Add the host to vCenter, but not to the NSX-prepared cluster. Keep the host in maintenance mode.
  2. Join the host to the same vDS that the other hosts of the NSX-prepared cluster are connected to.
  3. Move the host into the cluster and wait for the VIB installation to complete.
  4. Once the VIB installation has completed, take the host out of maintenance mode.

Remove Hosts from Cluster

You can remove a host from a cluster in two ways: via the CLI or from the GUI. Both require a reboot.

From the command line:

esxcli software vib remove --vibname=esx-vxlan

esxcli software vib remove --vibname=esx-vsip

From the GUI:

Put the host in maintenance mode.

Move the host out of the cluster; the VMkernel modules will be removed.

Reboot the host.

Take the host out of maintenance mode.

Sometimes NSX fails to remove a VIB from the ESXi host, and in that situation you have to remove the VIBs manually via the command line. To do so, connect to the host via SSH and run the commands below (the VIB names changed between releases, so check the VMware documentation for your exact version):

For NSX 6.2:

esxcli software vib remove --vibname=esx-vxlan

esxcli software vib remove --vibname=esx-vsip

For NSX 6.3 and later (from 6.3.3 the kernel modules were consolidated into a single VIB):

esxcli software vib remove --vibname=esx-nsxv

Configure the appropriate teaming policy for a given implementation

In the exam you might be asked about this by being given a physical connectivity diagram or an architecture design goal that you will need to take into account.

Teaming Policies will determine how network traffic is load balanced over the physical network cards (pNIC) on the hosts.

Looking at the documentation they might focus on the VTEP (VXLAN Tunnel Endpoint) configuration and how many to configure: one or two.

The VTEPs are responsible for encapsulating and de-encapsulating traffic on VXLAN networks. Every host has one or two VTEPs, each of which is a VMkernel port connected to the specific VLAN that the overlay (VXLAN) traffic will operate on.

Why have more than one VTEP on a host? One reason is to load balance your VXLAN traffic over multiple pNICs.

Your host should have at a minimum two Uplinks, and they might either be connected to the same switch or split between switches, LACP might be in use or not, and these all determine how you will configure your Teaming Policy.
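As a rough summary of the relationship between teaming mode and VTEP count (a sketch based on my reading of the NSX reference design guide; the mode names below are descriptive labels, not exact UI strings):

```python
# Hypothetical lookup: which teaming modes give one VTEP per uplink and
# which give a single VTEP per host, per the NSX reference design guide.
MULTI_VTEP_MODES = {"route_based_on_originating_port", "route_based_on_source_mac"}
SINGLE_VTEP_MODES = {"failover", "lacp", "static_etherchannel"}

def vteps_per_host(teaming_mode: str, uplinks: int) -> int:
    """Source-port/source-MAC teaming creates one VTEP per uplink;
    failover and channel-based modes use a single VTEP."""
    if teaming_mode in MULTI_VTEP_MODES:
        return uplinks
    if teaming_mode in SINGLE_VTEP_MODES:
        return 1
    raise ValueError(f"unknown teaming mode: {teaming_mode}")

print(vteps_per_host("route_based_on_originating_port", 2))  # 2
print(vteps_per_host("lacp", 2))                             # 1
```

In other words, the multi-VTEP options exist to spread VXLAN traffic across pNICs without relying on the physical switches for link aggregation.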

The image below, taken from the NSX Reference Design Guide, gives a rough idea of which teaming policy can be used in NSX.


Single VTEP Uplink Options


Multi-VTEPs Uplink Option


The recommended teaming mode for VXLAN traffic for ESXi hosts in the Compute Cluster is LACP. It provides sufficient utilization of both links and reduced fail-over time. It offers simplicity of VTEP configuration and troubleshooting.

For ESXi hosts in the Edge Clusters it is recommended to avoid the LACP or Static EtherChannel options. One of the main functions of the edge racks is providing connectivity to the physical network infrastructure and this is typically done using a dedicated VLAN-backed port-group where the NSX Edge (handling the north-south routed communication) establishes routing adjacencies with the next hop L3 devices. Selecting LACP or Static EtherChannel for this VLAN-backed port-group when the ToR switches perform the roles of L3 devices complicates the interaction between the NSX Edge and the ToR devices.

The main thing to take away: understand the teaming policies above, as I can see them asking about this. Know how to configure hosts and the vDS, and read the architecture papers and install guides.

This article has explained VTEP teaming policy beautifully via various examples.

Configure VXLAN Transport parameters according to a deployment plan

Make sure your MTU on the vDS is set at 1600 or higher.

Log into the vSphere Web Client.

Click Networking and Security, then Installation followed by the Host Preparation tab.

Select your cluster, hover over it and click the blue cog. Click Configure VXLAN.

Configure your options: vDS, VLAN, MTU, IP Pool, Teaming Policy.

I created a new IP Pool called IP_Pool_VTEP on the network.

Click OK to apply the changes.

The VXLAN status changes to Configured.

In vCenter, the hosts that have been configured for VXLAN now show a VMkernel port configured on the VTEP network on the Compute vDS.

Segment IDs

They are probably also looking for the configuration of the VXLAN Segment IDs. The Segment ID specifies a set range of VXLAN Network Identifiers (VNIs) that can be used, or in simpler terms, the number of Logical Switches. Each segment isolates its VXLAN traffic.

The range starts at 5000 and ends at 16777216, providing some 16.7 million network segments. An example Segment ID range you might configure is 5000-7000; we will walk through this soon.

Considering VLANs are limited to 4094, that's a pretty big number. For service providers, the ability to expand beyond the VLAN limit is huge.
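As a quick sanity check on those numbers, using the range quoted above (which comes from the 24-bit VNI field in the VXLAN header, minus the reserved low IDs):

```python
# Comparing the usable VXLAN segment space with the 802.1Q VLAN space.
vlan_ids = 4094                   # usable VLAN IDs in a 12-bit VLAN field
vni_ids = 16777216 - 5000 + 1     # segment IDs in the 5000-16777216 range

print(vni_ids)               # 16772217, roughly 16.7 million
print(vni_ids // vlan_ids)   # about 4096 times more segments than VLANs
```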

With the 6.2.x versions of NSX there is currently a limit of 10,000 VNIs, due to the vCenter vDS configuration maximum of 10,000 port groups.

Make sure if you have more than one NSX implementation (i.e. cross-vCenter NSX) that your Segment IDs do not overlap.

Note: On a single vCenter NSX deployment, if you want to add multiple Segment ID ranges you cannot do this from the Web Client; it has to be done via the NSX API (exam task there!).

From the VMware Installation Guide, the process to add multiple Segment ID ranges is to call the NSX REST API: each additional range is added with a POST to the NSX Manager's /api/2.0/vdn/config/segments endpoint, with a segmentRange body specifying the name, begin, and end of the range (check the API guide for your version for the exact payload).


To configure Segment IDs from the vSphere Client the process is shown below:

Log into the vSphere Web Client.

Click Networking and Security, then Installation followed by the Segment ID tab, then click Edit.

Enter your Segment ID range; I have chosen 5000-7000.

Read the relevant bits from the Install Guide and also the Administration Guide.

In the next post we will cover: Objective 1.3 – Configure and Manage Transport Zones

I hope this has been informative, and thank you for reading! Be social and share it on social media if you feel it's worth sharing!

References: Clinton Prentice & VMware documentation.