Configure and Manage Logical Load Balancing

By Rahul Sharma | July 10, 2018

In this post, we will cover the following topics of Objective 3.1 of the VCAP6-NV Deploy exam.

Objective 3.1 – Configure and Manage Logical Load Balancing

  • Configure the appropriate Load Balancer model for a given application topology
  • Configure SSL off-loading
  • Configure a service monitor to define health check parameters for a specific type of network traffic
  • Optimise a server pool to manage and share back-end servers
  • Configure an application profile and rules
  • Configure virtual servers

Configure the Appropriate Load Balancer Model for a given Application Topology

Before you begin, you must have a functioning NSX Edge Services Gateway (ESG) to configure load balancing; the Distributed Logical Router (DLR) does not support this feature.

The firewall on the ESG must be enabled; neither load balancing nor NAT can be used while it is disabled.

NSX load balancing (from now on referred to as ‘LB’) works either at Layer 4, dealing with packets, or at Layer 7, dealing with sockets.

Packet-based LB at Layer 4 works with TCP or UDP: each packet is processed individually (its headers are rewritten) and forwarded straight on to the destination.

Socket-based LB at Layer 7 works with the HTTP or HTTPS protocols: it must receive the entire request before sending it on to a back-end server.

It is important to know that the default mode for an NSX LB is socket-based for TCP, HTTP and HTTPS, UDP being the exception.

Also note that using Layer 7 socket-based LB has an impact on the sizing of the Edge, which may need to be resized: VMware recommends Quad-Large or X-Large for LB. Sizing is shown in the VMware diagram below.


Most LBs use the same concepts just called something different. This is what NSX calls its main LB services:

Virtual Server: This is a virtual IP and port combination that listens for requests. Your LB hosts this IP and port, and it front-ends your Server Pool.

Server Pool: This is a grouping of servers either physical or virtual that can service the incoming request e.g. web servers. It normally contains more than one server for redundancy.

Server Pool Member: This is a single server within a Server Pool.

Service Monitor: These probe the health status of pool members to determine the health of the pool.
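How these concepts relate can be sketched as a toy data model (the class and field names below are purely illustrative, not the NSX API):

```python
from dataclasses import dataclass, field

@dataclass
class PoolMember:
    name: str
    ip: str
    port: int = 80   # port the back-end service listens on
    weight: int = 1  # relative share of traffic

@dataclass
class ServerPool:
    name: str
    members: list = field(default_factory=list)

@dataclass
class VirtualServer:
    name: str
    vip: str           # the virtual IP clients connect to
    port: int
    pool: ServerPool   # the back-end pool this VIP front-ends

pool = ServerPool("WebPool", [PoolMember("Web1", "10.0.0.11"),
                              PoolMember("Web2", "10.0.0.12")])
vs = VirtualServer("Webvs", "10.0.0.5", 80, pool)
print(len(vs.pool.members))  # → 2
```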

Below is some more information on the LB features. This is from the VMware Validated Design documentation. The default Layer 7 socket-based LB has a larger feature set.


A design decision that needs to be made is whether to have one ESG for all LB services, or a dedicated ESG for each application that needs to be load balanced.

If you have simple LB requirements this might mean that you just modify your central Edge Services Gateway (ESG) and configure that for LB.

If your requirements are greater, or you want to segregate LB services, you might prefer to deploy a new ESG for each application to be load balanced. This has a higher overhead, but it creates a smaller blast radius should someone misconfigure something, and it also means no one from the app team is messing with your central ESG.

The NSX Edge Services Gateway supports two types of load balancer deployment: one-armed mode (or proxy mode) and inline mode (or transparent mode).

One-armed mode (or proxy mode): In proxy mode, the load balancer uses its own IP address as the source address when sending requests to a back-end server. The back-end server sees all traffic as coming from the load balancer and responds to the load balancer directly. The following events take place when the LB is deployed in proxy mode:

  1. The user connects to a VIP address (LB address) that is configured on the Edge gateway.
  2. The ESG performs a destination NAT to replace the VIP with the IP address of one of the servers in the configured pool.
  3. The ESG performs a source NAT to replace the user's IP address with its own IP address (VIP).
  4. The ESG forwards the request to one of the servers from the pool.
  5. The server replies to the ESG instead of the user, because the user's IP address was replaced by the ESG VIP.
  6. The ESG relays the server's response to the user.
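The DNAT/SNAT steps in proxy mode can be sketched in a few lines of Python (all addresses are made up for illustration):

```python
# Simulation of the proxy-mode packet flow; addresses are illustrative.
VIP = "10.0.0.5"                       # LB address on the ESG
POOL = ["10.0.0.11", "10.0.0.12"]      # back-end pool members

def proxy_mode_forward(packet, chosen_server):
    """Apply DNAT (VIP -> pool member) then SNAT (client IP -> VIP)."""
    packet = dict(packet, dst=chosen_server)  # destination NAT
    packet = dict(packet, src=VIP)            # source NAT
    return packet

client_request = {"src": "192.168.1.50", "dst": VIP}
out = proxy_mode_forward(client_request, POOL[0])
print(out)  # → {'src': '10.0.0.5', 'dst': '10.0.0.11'}
```

Because the source was rewritten to the VIP, the pool member replies to the ESG, which relays the response back to the client.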

The image below shows a one-armed mode deployment.


This model is simpler to deploy and provides greater flexibility than traditional load balancers. It allows deployment of load balancer services directly on the logical segments without requiring any modification on the centralized NSX Edge providing routing communication to the physical network. On the downside, this option requires provisioning more NSX Edge instances and mandates the deployment of source NAT that does not allow the servers in the datacentre to have visibility into the original client IP address. The load balancer can insert the original IP address of the client into the HTTP header before performing S-NAT – a function named “Insert X-Forwarded-For HTTP header”. This provides the servers visibility into the client IP address, and is limited to HTTP traffic.
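The "Insert X-Forwarded-For HTTP header" function mentioned above amounts to the LB recording the original client IP in an HTTP header before S-NAT hides it. A minimal sketch:

```python
def insert_x_forwarded_for(headers, client_ip):
    """Record the original client IP before S-NAT replaces it (HTTP only)."""
    headers = dict(headers)                    # don't mutate the caller's dict
    headers["X-Forwarded-For"] = client_ip
    return headers

h = insert_x_forwarded_for({"Host": "webfarm.lab.local"}, "192.168.1.50")
print(h["X-Forwarded-For"])  # → 192.168.1.50
```

The back-end web server can then log or act on the `X-Forwarded-For` value even though the TCP source address it sees is the LB's.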

Inline mode (or transparent mode): An inline load balancer is connected to the network with two network interfaces. In this scenario the load balancer has an external network, and an internal network that is not directly accessible from the external network. An inline load balancer acts as a NAT gateway for the VMs on the internal network. The traffic flow is displayed in the following diagram:


The following events take place in an inline load balancer deployment:

  1. The user connects to a VIP address (LB address) that is configured on the Edge gateway.
  2. The ESG performs a destination NAT to replace the VIP with the IP address of one of the servers from the pool.
  3. The ESG forwards the request to the server.
  4. The server receives the request from the ESG with the user's IP as the source and replies directly to the user.
  5. As the server replies to the user, the response goes through the web server's default gateway, which is the ESG.
  6. The ESG updates its load-balancing state and forwards the response to the user via its uplink.
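The contrast with proxy mode can be sketched the same way: inline mode performs only the destination NAT, so the pool member sees the real client IP (addresses are illustrative):

```python
# Simulation of the inline-mode packet flow; addresses are illustrative.
VIP = "10.0.0.5"

def inline_mode_forward(packet, chosen_server):
    """Inline mode: destination NAT only; the client source IP is preserved."""
    return dict(packet, dst=chosen_server)  # DNAT, no SNAT

client_request = {"src": "192.168.1.50", "dst": VIP}
out = inline_mode_forward(client_request, "10.0.0.11")
print(out["src"])  # → 192.168.1.50 (the server sees the real client IP)
```

This is why the ESG must be the pool members' default gateway in this mode: the return traffic addressed to the client has to pass back through the LB.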

This deployment model is also quite simple, and additionally provides the servers full visibility into the original client IP address. It is less flexible from a design perspective as it usually forces the load balancer to serve as default gateway for the logical segments where the server farms are deployed. This implies that only centralized, rather than distributed, routing must be adopted for those segments. Additionally, in this case load balancing is another logical service added to the NSX Edge which is already providing routing services between the logical and the physical networks. Thus it is recommended to increase the form factor of the NSX Edge to X-Large before enabling load-balancing services.

For more information on the differences between the two modes, please read the VMware NSX Design Guide.

Summed up: if your one central ESG has an interface on both the external and internal networks and you configure load balancing on it, then the deployment is in transparent/inline mode. The VMs on the load-balanced network will have this ESG as their default gateway.

If an ESG has only one interface, on the network containing the load-balanced servers, then the deployment is in proxy/one-armed mode. The VMs on the load-balanced network cannot use this ESG as their default gateway and must use the default gateway address configured for the network (the gateway defined when the logical switch was connected to the ESG or DLR).

It's possible that in the exam we are tasked with deploying an ESG, configuring it from scratch, and enabling and configuring load balancing. The deployment goal VMware gives will determine how to deploy and configure the Edge; it's the interface configuration that determines your load balancing mode (transparent/inline or proxy/one-armed).

Make sure you practice deploying ESGs and configuring the interfaces in different ways.

My ESG is currently configured in transparent/inline mode and thus has an uplink and an internal interface. Below is a screenshot of my interfaces. As we walk through this post we will be configuring load balancing on this ESG.


How to Enable NSX Load Balancing

This is a pretty simple process once you have deployed and configured your ESG.

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG to configure LB on.

Click Manage, then Load Balancer.

Click Global Configuration then click the Edit button.

Select the Enable Load Balancer option. You can additionally enable service insertion to integrate third-party NSX partner load balancing services.


Load balancing is now enabled, but still needs to be configured for your application to be load balanced.

Below is the basic configuration for load balancing two web servers.

I have created a Pool called WebPool which contains two web servers: Web 1 VM and Web 2 VM.


I configured an HTTP Application Profile as below.


I configured a Virtual Server as below.


I installed Microsoft IIS on both web VMs. Hitting the VIP I get the default IIS page from one of the web servers.


That's a very basic overview. You need to configure firewall rules to allow external connections; the DNAT rules are created automatically. I have decided not to go into more detail here and to return focus to the lab objectives. Practice deploying Edges and configuring LB with active VMs (i.e. web servers) to test the configuration.

Important Note: On the Edit Pool screen, in the Load Balancing section, there is a tick box labelled Transparent. This makes the source client IP visible to the back-end servers; by default the back-end servers will see the source as the internal IP of the LB. Don't confuse this with transparent (inline) mode. It's not the same thing!


Configure SSL Off-Loading

At this point I have the ESG Load Balancing configured with a virtual server listening on HTTP port 80 with a pool connected which contains two web server VMs with IIS installed.

I could implement SSL certificates on my web servers and change the load balancing configuration from HTTP 80 to HTTPS 443 and just pass through the SSL traffic to the web servers, or I could leave it how it is and configure SSL off-loading.

SSL off-loading is where some ‘thing’ looks after the processing (encryption/decryption) of traffic sent by SSL which takes the SSL processing load off the servers. This ‘thing’ is the NSX Edge Services Gateway (ESG). In simpler terms, the front-end from client to LB is configured with HTTPS, and the back-end from LB to web servers is configured with HTTP. The ESG does the SSL off-loading.
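Conceptually, SSL off-loading maps an encrypted HTTPS front-end connection onto a plain HTTP back-end request. A toy sketch (field names and ports are illustrative; real TLS termination is of course far more involved):

```python
def offload(client_request):
    """Terminate HTTPS at the LB and forward plain HTTP to the pool member."""
    assert client_request["scheme"] == "https"  # front-end must be encrypted
    # The LB decrypts, then re-issues the request unencrypted to the back end.
    return dict(client_request, scheme="http", port=80)

req = {"scheme": "https", "port": 443, "path": "/"}
print(offload(req))  # → {'scheme': 'http', 'port': 80, 'path': '/'}
```

The certificate and private key live on the ESG, which is why the next step is to get a certificate onto the Edge.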

I couldn't find much information on SSL off-loading in the NSX documentation, but after some trial and error I got it working.

Create a certificate for your NSX Edge

Note: You can also use self-signed certificates. I have gone a step further and have configured CA signed certificates from my internal CA.

At a high-level the steps are:

  • Create the CSR
  • Copy the CSR content and upload to your CA
  • Retrieve both the certificate and Root CA certificate
  • Import the Root CA certificate
  • Import the certificate

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure a certificate for.

Click Manage, then Certificates.

Click the blue cog (actions) and select Generate CSR.

Populate the fields with relevant information.


Now copy the content under PEM Encoding, which is the CSR. Upload the CSR to your CA and retrieve the signed web services certificate, as well as the Root CA certificate.


At this point you have two certificates (a) Root CA and (b) web services certificate.

On your ESG, click Manage, then Certificates.

Click the green + sign to Add a CA Certificate.

Paste the contents of your CA Root Certificate.


You can now see the Root CA certificate in the console.


Now copy the contents of your web services certificate. Click on your CSR in the console and then select the blue cog (actions). Click Import Certificate.


Paste the content of your web services certificate.


You can now see in the console the CA and web services certificates.


You now have the full certificate chain and can proceed to configure SSL Off-Loading by modifying the Application Profile from the Load Balancer tab.

Tick the box to enable the web services Service Certificate, and change the Type to HTTPS.


Followed by enabling the CA certificate.


Lastly, modify the Virtual Server. Change the protocol to HTTPS and the Port to 443.


I have configured DNS to point webfarm.lab.local at the virtual server IP (VIP).

In a browser when I hit the URL of https://webfarm.lab.local I am presented with a secure SSL session (padlock closed in IE).


All good. SSL off-load complete.

Configure a Service Monitor to Define Health Check Parameters for a Specific Type of Network Traffic

A Service Monitor is just a health check. You select a protocol and some options: the port, the interval, and the timeout and retry values. The configured Service Monitor is then attached to the Pool of load-balanced servers and determines the health state of the servers in the pool. It stops client requests being sent to a server that has failed.

The options in NSX for Service Monitors are:

Interval: How often the monitor will poll the server in seconds
Timeout: The maximum time for the response to be received in seconds
Retries: The number of times to recheck before deeming the server offline

HTTP/HTTPS: Can configure a GET or POST and text to send and expected to receive
TCP/UDP: Can configure ports, text to send and expected to receive
ICMP: It is what it is: pings

If you define the Service Monitor as HTTP or HTTPS you will also need to populate the Expected, Method and URL fields.

  • Expected: The string the monitor expects to match in the response e.g. HTTP/1.1
  • Method: GET or POST etc
  • URL: The URL to send in the request

Send: The data to send to the server

Receive: The string to be matched to the response. If EXPECTED is not matched, the monitor does not try to match the Receive content.
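The Expected/Receive behaviour described above can be sketched as follows (a simplified model for intuition, not the actual monitor implementation):

```python
def http_monitor_check(status_line, body, expected="HTTP/1.1", receive=None):
    """Toy model of an HTTP service monitor check.
    If Expected is not matched, Receive is never evaluated."""
    if expected and expected not in status_line:
        return False            # Expected failed: server marked unhealthy
    if receive is not None:
        return receive in body  # Receive is only checked after Expected passes
    return True

# Healthy: status line matches Expected and body contains the Receive string.
assert http_monitor_check("HTTP/1.1 200 OK", "<html>Welcome</html>",
                          receive="Welcome")
# Unhealthy: Expected fails, so Receive is irrelevant.
assert not http_monitor_check("HTTP/0.9 200", "Welcome", receive="Welcome")
```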

The VMware documentation on Service Monitors is really poor!

Service Monitors once configured are added to a Pool containing servers to monitor.

For example, the below shows my WebPool with the default HTTP Service Monitor attached, which does an HTTP GET and that's all. It doesn't check the response content, only that there is a listening web service on port 80.


I recommend that you configure some web servers like I have and play with Service Monitors to get a good understanding of the different ways they can be deployed.

Optimize a Server Pool to Manage and Share Back-End Servers

Pools are a pretty simple concept: a grouping of similar systems that service the same requests. Think web servers: all of them will be configured the same way (that always happens in production, doesn't it? lol), all handle the same requests, and a client connection could be sent to, and serviced by, any system in the pool.

You have a Virtual Server front-ending the Pool with a virtual IP and port.

You use Service Monitors to check the health of the individual servers in the Pool.

You additionally use what I would call load balancing Policies, or what NSX calls Algorithms. These policies or algorithms determine how client requests are load balanced over the servers. NSX supports six different load balancing Policies.

Round-Robin: Pretty simple to understand; it just cycles equally through all the servers in the pool.

Least Conn: The server with the fewest active connections will be sent the next client request.

IP-Hash: Selects a server based on a hash of the source IP address and the total weight of the running servers.

HTTP Header: Selects a server based on the value of a named HTTP header in each request.

URL: Selects a server based on a hash of a URL parameter looked up in the query string of each request.

URI: The left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers to select a server.

There isn't that much you can do with pools. You can create and delete a pool, assign a policy or algorithm to it, add members (selecting the port and the weighting for each server), and remove members.

I am not going to screenshot all this so make sure you configure your pool in different ways and see the results.

My WebPool settings (normally two servers; only one was present when the screenshot was taken):


And how a member of the pool is configured:



Configure an Application Profile and Rules

Application Profiles and Application Rules are different beasts. So I will explain them separately.

Application Profile

An Application Profile is basically a template and determines how traffic is handled or manipulated. You create a profile, determine protocols, session persistence, cookies, SSL and the like. The profile is then attached to a Virtual Server. The Virtual Server will process the traffic based on the configuration of the profile.

An Application Profile will be defined by the application requirements e.g. it requires the HTTPS protocol, no session persistence and uses SSL certificates for SSL Termination etc.

I have a basic Application Profile called Web Profile that is for my web farm. As shown below.


To configure Application Profiles:

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure for Load Balancing.

Click Manage, then Load Balancer, then Application Profiles.

Click the green + sign to add a new Application Profile.

There are a few settings that can be configured: (see above screen shot)

Type: The incoming traffic protocol

HTTP Redirect: Gives the ability to redirect to another URL

Persistence: The glue that sticks a session to the same back-end server. NSX supports cookies, source IP and MSRDP (for RDSH farms)

Insert X-Forwarded For: This allows back-end servers to see the client source IP

Enable Pool Side SSL: Allows SSL communication between the LB and the back-end servers (required for end-to-end SSL)

Service Certificates: SSL certificates required to terminate SSL on the LB
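Cookie persistence can be illustrated with a toy lookup table (the cookie name and the selection logic below are purely illustrative, not how the ESG actually implements it):

```python
# Toy model of cookie-based session persistence.
session_table = {}

def pick_server(cookie, pool):
    """Return the member a cookie is stuck to, choosing one on first contact."""
    if cookie in session_table:
        return session_table[cookie]               # sticky: reuse the choice
    server = pool[len(session_table) % len(pool)]  # any algorithm would do here
    session_table[cookie] = server
    return server

pool = ["10.0.0.11", "10.0.0.12"]
first = pick_server("JSESSIONID=abc123", pool)   # hypothetical cookie value
second = pick_server("JSESSIONID=abc123", pool)
assert first == second  # the session stays on one back-end server
```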

Application Rules

An Application Rule allows more granular control than an Application Profile. Application Rules are created and also attached to a Virtual Server to process the incoming traffic.

Think of an Application Rule as a trigger. If a condition is matched then do something.

To configure Application Rules:

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure for Load Balancing.

Click Manage, then Load Balancer, then Application Rules.

Click the green + sign to add a new Application Rule.


Once you have created the Application Rule you attach it to the Virtual Server, click the Advanced tab.


From the VMware NSX Administration Guide, below are some Application Rule examples:





Configure Virtual Servers

The Virtual Server to me is really the glue that joins all the load balancing bits together. The Virtual Server takes the settings from the Application Profile, the Pool, the protocol and the port, and allows you to create a virtual IP (VIP) that load balances clients over a pool of servers.

To Add or Configure a Virtual Server:

Log into the vSphere Web Client.

Click Networking and Security, then NSX Edges.

Double-click the ESG that you want to configure for Load Balancing.

Click Manage, then Load Balancer, then Virtual Servers.

Click the green + sign to add a new Virtual Server, or select an existing Virtual Server and edit it. I have a Virtual Server called Webvs as shown below.


What Settings can be Defined on a Virtual Server (VS)?

Enable: Allows the VS to be enabled or disabled

Acceleration: If you are only using Layer 4 load balancing (i.e. TCP or UDP), you can enable acceleration for higher performance. This means you are load balancing packets instead of HTTP/HTTPS sessions.

Application Profile: Select an Application Profile already defined

Name: The name for this Virtual Server

IP Address: The virtual IP (VIP) to listen for incoming requests

Protocol: The protocol the VIP will handle traffic for

Port: The port the VIP will listen on

Default Pool: The pool that the Virtual Server will load balance requests over

Connection Limit: The maximum number of concurrent connections

Connection Rate Limit: The maximum allowed new connection requests per second (CPS)
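The Connection Rate Limit setting can be illustrated with a toy per-second counter (a model for intuition only, not the ESG's actual implementation):

```python
import time

class RateLimiter:
    """Toy connection rate limit: at most `cps` new connections per
    one-second window."""
    def __init__(self, cps):
        self.cps = cps
        self.window = int(time.time())
        self.count = 0

    def allow(self, now=None):
        now = int(now if now is not None else time.time())
        if now != self.window:            # a new second starts a fresh window
            self.window, self.count = now, 0
        if self.count < self.cps:
            self.count += 1
            return True                   # connection accepted
        return False                      # over the limit: connection refused

rl = RateLimiter(cps=2)
print([rl.allow(now=100), rl.allow(now=100), rl.allow(now=100)])
# → [True, True, False]
print(rl.allow(now=101))  # → True (a new one-second window)
```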

That's all for this post. I recommend you read the reference documents linked above.

In the next post we will cover Objective 3.2 – Configure and Manage Logical Virtual Private Networks (VPNs).

I hope this has been informative, and thank you for reading! Be social and share it on social media if you feel it's worth sharing!


I am Rahul Sharma. I currently work as a Subject Matter Expert for SDDC and Cloud Infrastructure Services, mainly on the VMware virtualization platform.

I have 9 years of IT experience and expertise in designing and deploying VMware vSphere, vSAN, vCloud Director, vRealize Automation, SRM and NSX, as well as modern data center technologies such as vBlock, Cisco UCS, Dell, HPE C7000, HPE Synergy HCI, etc.

I am VCIX6-DCV, dual VCP (DCV & NV), MCSE – Cloud, CCNA and ITIL v3 certified.
