Prepare Host Clusters for Network Virtualization

July 3, 2018

In this post, I will cover the following topics of Objective 1.2 of the VCAP6-NV Deploy exam:

Objective 1.2 – Prepare Host Clusters for Network Virtualization

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
    • Add/Remove Hosts from Cluster
  • Configure the appropriate teaming policy for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

Prepare vSphere Distributed Switching for NSX

First, note that the minimum MTU for NSX is 1600 bytes. Double-check your vDS to confirm it is configured at the appropriate size for your environment. The 1600-byte requirement exists because the original Ethernet frame is wrapped (encapsulated) with additional VXLAN, UDP and IP headers, increasing its size; the result is called a VXLAN encapsulated frame.

Also make sure that you have fully configured the vDS and migrated port groups, uplinks, etc. from the vSS to the vDS, because NSX works only with the distributed switch, not with standard switches.

To verify/configure the MTU on the vDS, select the vDS from the list, navigate to Manage > Settings > Properties, hit the Edit button, and under the 'Advanced' tab change the MTU to 1600 and hit OK.
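
You can also confirm the configured MTU from an ESXi host's shell; the switch listing shows the vDS the host is attached to along with its MTU value:

esxcfg-vswitch -l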

Prepare a cluster for NSX

The hosts have to be prepared for NSX. Host preparation installs the NSX kernel modules (VIBs) on the ESXi hosts and builds the NSX control-plane and management-plane fabric. Installation is done on a per-cluster basis.

Before doing host preparation, ensure the following prerequisites have been met:

  • Make sure you have the NSX Manager and Controllers deployed
  • The cluster hosts are joined to the same vDS
  • DNS forward/reverse records are in place (a quick check follows below)
  • Your vCenter Server is functioning
  • If vSphere Update Manager (VUM) is in use in the environment, disable it before starting host preparation
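
A quick way to confirm the DNS forward and reverse records is an nslookup in each direction; the host name and IP below are just hypothetical lab examples:

nslookup esxi01.lab.local
nslookup 10.0.0.11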

If you have a dedicated Management Cluster, you probably would not install NSX on those hosts.

Log into the vSphere Web Client.

Note: Make sure that there are no issues with the cluster before you start. Select your cluster and click the Actions icon; if Resolve appears, it means a host needs a reboot. You can also click the red 'X' to see what the issue is. If you cannot clear the Resolve state, you will need to move the hosts to a new cluster and delete the old one.

To prepare ESXi hosts for NSX installation, click Networking & Security, navigate to Installation > Host Preparation, select the appropriate cluster, and click the gear button to start the installation of the NSX VIBs.


NSX will start pushing the required VIBs to the hosts.


Once the VIBs are successfully installed, make sure each host and the overall cluster show a green tick and the state 'Enabled'.

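You can also confirm that the VIBs landed on a host from its shell; the grep pattern below matches both the older VIB names and the single esx-nsxv VIB that replaced them from NSX 6.3.3 onwards:

esxcli software vib list | grep -E 'esx-vsip|esx-vxlan|esx-nsxv'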

Add Hosts to Cluster

When you add a new host to a cluster that is already prepared for NSX, the VIBs are automatically pushed to the new host. The best way to add ESXi hosts to an existing NSX-prepared cluster is:

  1. Add the host to vCenter, but not to the NSX-prepared cluster, and keep the host in maintenance mode.
  2. Add the newly added host to the same vDS that the other hosts of the NSX-prepared cluster are joined to.
  3. Move the newly added ESXi host into the cluster and wait for the VIB installation to complete.
  4. Once the VIB installation is complete, take the host out of maintenance mode (a CLI alternative for steps 1 and 4 follows below).
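
If you prefer to flip maintenance mode from the host's shell rather than the Web Client, the standard esxcli calls are:

esxcli system maintenanceMode set --enable true
esxcli system maintenanceMode set --enable false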

Remove Hosts from Cluster

You can remove a host from a prepared cluster in two ways: via the CLI or via the GUI. Both require a reboot.

From the command line:

esxcli software vib remove --vibname=esx-vxlan

esxcli software vib remove --vibname=esx-vsip

From the GUI:

  1. Put the host in maintenance mode.
  2. Move the host out of the cluster; the VMkernel modules will be removed.
  3. Reboot the host.
  4. Take the host out of maintenance mode.

Sometimes NSX fails to remove a VIB from an ESXi host, and in that situation we have to remove the VIBs manually from the command line. To do so, connect to the host via SSH and run the commands for your version:

For NSX 6.2.x (and 6.3.x releases prior to 6.3.3):

esxcli software vib remove --vibname=esx-vxlan

esxcli software vib remove --vibname=esx-vsip

For NSX 6.3.3 and later (these releases replace the two VIBs above with a single esx-nsxv VIB):

esxcli software vib remove --vibname=esx-nsxv

Configure the appropriate teaming policy for a given implementation

In the exam they might test us by providing a physical connectivity diagram or architecture design goal that we will need to take into account.

Teaming policies determine how network traffic is load balanced over the physical network cards (pNICs) on the hosts.

Looking at the documentation they might focus on the VTEP (VXLAN Tunnel Endpoint) configuration and how many to configure: one or two.

The VTEPs are responsible for encapsulating and de-encapsulating VXLAN traffic. Each host has one or more VTEPs, and each VTEP is a VMkernel port connected to the specific VLAN that the overlay (VXLAN) traffic runs over.

Why have more than one VTEP on a host? One reason is to load balance your VXLAN traffic over multiple pNICs.

Your hosts should have a minimum of two uplinks. Those uplinks might be connected to the same switch or split between switches, and LACP might or might not be in use; all of this determines how you will configure your teaming policy.

The image below, taken from the NSX Reference Design Guide, gives a rough idea of which teaming policies can be used with NSX.

[Figure from the NSX Reference Design Guide: teaming policy options and the number of VTEPs each supports]

Single VTEP Uplink Options

[Figure: uplink and teaming options with a single VTEP]

Multi-VTEP Uplink Options

[Figure: uplink and teaming options with multiple VTEPs]

The recommended teaming mode for VXLAN traffic on ESXi hosts in the compute clusters is LACP. It provides good utilization of both links and reduced failover time, and it keeps VTEP configuration and troubleshooting simple.

For ESXi hosts in the Edge Clusters it is recommended to avoid the LACP or Static EtherChannel options. One of the main functions of the edge racks is providing connectivity to the physical network infrastructure and this is typically done using a dedicated VLAN-backed port-group where the NSX Edge (handling the north-south routed communication) establishes routing adjacencies with the next hop L3 devices. Selecting LACP or Static EtherChannel for this VLAN-backed port-group when the ToR switches perform the roles of L3 devices complicates the interaction between the NSX Edge and the ToR devices.

The main things to take away: understand the teaming policies above (I can see them asking about this), know how to configure the hosts and the vDS, and read the architecture papers and install guides.

This article explains VTEP teaming policies beautifully with various examples.

Configure VXLAN Transport parameters according to a deployment plan

Make sure your MTU on the vDS is set at 1600 or higher.

Log into the vSphere Web Client.

Click Networking & Security, then Installation, followed by the Host Preparation tab.

Select your cluster, hover over it and click the blue cog. Click Configure VXLAN.

Configure your options: vDS, VLAN, MTU, IP Pool, Teaming Policy.

I created a new IP pool called VTEPpool on the 172.16.0.0/24 network. The hosts' management network is 10.0.0.0/24.


Click OK to apply the changes.

The VXLAN status changes to Configured.


In vCenter, the hosts that have been configured for VXLAN now show a VMkernel port configured on the 172.16.0.0/24 network on the Compute vDS.

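You can verify the same thing from a host's shell. The first command below is added by the NSX VIBs and shows the VXLAN configuration applied to the vDS (VTEP count, MTU, teaming policy); the vmkping tests the 1600-byte path end to end over the vxlan netstack. The destination IP is just a hypothetical remote VTEP from my pool, and -s 1572 plus the ICMP and IP headers adds up to a 1600-byte packet with -d (don't fragment) set:

esxcli network vswitch dvs vmware vxlan list
vmkping ++netstack=vxlan -d -s 1572 172.16.0.12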

Segment IDs

They are probably also looking for the configuration of the VXLAN Segment IDs. The Segment ID pool specifies the range of VXLAN Network Identifiers (VNIs) that can be used, or, in simpler terms, the number of logical switches; each segment keeps its VXLAN traffic isolated from the others.

The allowed range starts at 5000 and ends at 16777216, providing some 16.7 million network segments. An example Segment ID range you might configure is 5000-7000; we will walk through this shortly.

Considering VLANs are limited to 4094, that's a pretty big number. For service providers, the ability to expand beyond the VLAN limit is huge.

With the 6.2.x versions of NSX there is a practical limit of 10,000 VNIs, due to the vCenter vDS configuration maximum of 10,000 port groups.

Make sure if you have more than one NSX implementation (i.e. cross-vCenter NSX) that your Segment IDs do not overlap.

Note: On a single-vCenter NSX deployment, if you want to add multiple Segment ID ranges you cannot do this from the Web Client; it has to be done via the NSX REST API (possible exam task there!).

The VMware Installation Guide covers the process for adding multiple Segment ID ranges through the API.
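
From memory, it boils down to a POST against the NSX Manager's segment ID pool endpoint, roughly like this; the pool name, range, credentials and NSX Manager address are hypothetical, so double-check the endpoint and XML against the API guide for your NSX version:

curl -k -u 'admin:PASSWORD' -X POST \
  -H 'Content-Type: application/xml' \
  -d '<segmentRange><name>Segment-Pool-2</name><begin>7001</begin><end>7999</end></segmentRange>' \
  https://nsxmanager.lab.local/api/2.0/vdn/config/segments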

To configure Segment IDs from the vSphere Web Client, the process is shown below:

Log into the vSphere Web Client.

Click Networking & Security, then Installation, followed by the Segment ID tab, and click Edit.

Enter your Segment ID range; I have chosen 5000-7000.


Read the relevant bits from the Install Guide and also the Administration Guide.

In the next post we will cover Objective 1.3 – Configure and Manage Transport Zones.

I hope this has been informative, and thank you for reading! Be social and share it on social media if you feel it's worth sharing!
