VMware NSX-V design

Network virtualization provides a network overlay that exists within the virtual layer. It gives the architecture features such as rapid provisioning, deployment, reconfiguration, and destruction of on-demand virtual networks. This design uses the vDS and VMware NSX for vSphere to implement virtual networking.

In this design, the NSX Manager is deployed in the initial cluster. The NSX Manager is assigned a VLAN-backed IP address from the private portable address block, which is designated for management components and configured with the DNS and NTP servers that are presented in Common services design.

The following figure shows the placement of the NSX Manager in relation to other components in the architecture.

Figure 1. NSX Manager network overview

After initial deployment, the IBM Cloud automation deploys three NSX controllers within the initial cluster. Each of the controllers is assigned a VLAN-backed IP address from the Private A portable subnet that is designated for management components. Additionally, the design creates VM-VM anti-affinity rules to separate the controllers among the hosts in the cluster. The initial cluster must contain a minimum of three nodes to ensure high availability for the controllers.
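
The anti-affinity behavior can also be reproduced programmatically. The following is a minimal pyVmomi sketch, not part of the IBM Cloud automation, that creates a VM-VM anti-affinity rule for three controller VMs; the vCenter hostname, credentials, cluster name, and controller VM names are placeholders.

```python
# Sketch only: create a VM-VM anti-affinity rule so that DRS keeps the three
# NSX controllers on separate hosts. All names and credentials below are
# placeholders; the IBM Cloud automation performs the equivalent step for you.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "initial-cluster")        # placeholder name
controllers = [find_by_name(vim.VirtualMachine, name)
               for name in ("NSX_Controller_1", "NSX_Controller_2", "NSX_Controller_3")]

# VM-VM anti-affinity: DRS places the listed VMs on different hosts.
rule_info = vim.cluster.AntiAffinityRuleSpec(name="nsx-controller-anti-affinity",
                                             enabled=True, vm=controllers)
rule_spec = vim.cluster.RuleSpec(info=rule_info, operation="add")
cluster_spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=cluster_spec, modify=True))
```

Because the initial cluster contains at least three nodes, DRS can satisfy this rule for all three controllers.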

In addition to the controllers, the IBM Cloud automation prepares the deployed vSphere hosts with NSX vSphere Installation Bundles (VIBs) to enable the use of a virtualized network through VXLAN Tunnel Endpoints (VTEPs). The VTEPs are assigned a VLAN-backed IP address from the Private A portable IP address range that is specified for VTEPs, as listed in VLANs. The VXLAN traffic resides on the untagged VLAN and is assigned to the private vDS.

Next, a segment ID pool is assigned and the hosts in the cluster are added to the transport zone. Only unicast is used in the transport zone because Internet Group Management Protocol (IGMP) snooping is not configured within IBM Cloud. Two VTEP kernel ports are configured per host on the same VTEP-dedicated subnet, per VMware best practice.
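
The segment ID pool and unicast transport zone can be expressed against the NSX-V REST API. The following sketch assumes the standard NSX-V endpoints for segment ID pools and transport zone scopes; the NSX Manager hostname, credentials, segment range, and cluster managed object ID (domain-cXX) are placeholders.

```python
# Sketch only: assign a VXLAN segment ID pool and create a unicast transport
# zone through the NSX-V REST API. Hostname, credentials, segment range, and
# the cluster MoRef are placeholders; adjust to your instance.
import requests

NSX = "https://nsxmgr.example.com"
AUTH = ("admin", "********")
HEADERS = {"Content-Type": "application/xml"}

# 1. Segment ID pool used to allocate VXLAN network identifiers (VNIs).
segment_pool = """
<segmentRange>
  <name>segment-pool-1</name>
  <begin>6000</begin>
  <end>6999</end>
</segmentRange>"""
requests.post(f"{NSX}/api/2.0/vdn/config/segments",
              data=segment_pool, headers=HEADERS, auth=AUTH, verify=False)

# 2. Transport zone in unicast control-plane mode (no IGMP snooping required),
#    scoped to the initial cluster.
transport_zone = """
<vdnScope>
  <name>transport-zone-1</name>
  <clusters>
    <cluster>
      <cluster>
        <objectId>domain-c26</objectId>
      </cluster>
    </cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>"""
requests.post(f"{NSX}/api/2.0/vdn/scopes",
              data=transport_zone, headers=HEADERS, auth=AUTH, verify=False)
```

In unicast mode the NSX controllers handle VTEP learning and replication, which is why no IGMP configuration is needed on the underlying IBM Cloud network.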

Then, if the instance has public network interfaces, two NSX Edge Services Gateway pairs are deployed. One gateway pair is used for outbound traffic from automation components that reside in the private network. A second gateway, known as the customer-managed edge, is deployed and configured with an uplink to the public network and an interface that is assigned to the private network. For more information about the NSX Edge Services Gateways that are deployed as part of the solution, see NSX Edge Services Gateway solution architecture.

Cloud administrators can configure any required NSX components, such as the Distributed Logical Router (DLR), logical switches, and firewalls. The available NSX features depend on the NSX license edition that you choose when you order the instance. For details, see VMware NSX edition comparison.

The NSX Manager is installed with the specifications that are listed in the following table.

Attribute | Specification
NSX Manager | Virtual appliance
Number of vCPUs | 4
Memory | 16 GB
Disk | 60 GB on the management NFS share
Disk type | Thin-provisioned
Network | Private A portable designated for management components


Distributed switch design

The design uses a minimum number of vDS switches. The hosts in the cluster are connected to the public and private networks and are configured with two distributed virtual switches. The use of two switches follows the IBM Cloud network practice of separating the public and private networks. The following diagram shows the vDS design.

Figure 2. Distributed switch design

As shown in the previous figure, one vDS is configured for public network connectivity (SDDC-Dswitch-Public) and the other vDS is configured for private network connectivity (SDDC-Dswitch-Private). Separating different types of traffic is required to reduce contention and latency and increase security.

VLANs are used to segment physical network functions. This design uses three VLANs: two for private network traffic and one for public network traffic. The following table shows the traffic separation.

VLAN | Designation | Traffic type
VLAN 1 | Private A | ESXi management, management, VXLAN (VTEP)
VLAN 2 | Private B | vSAN, NFS, and vMotion
VLAN 3 | Public | Available for internet access

Traffic from workloads travels on VXLAN-backed logical switches.

The vSphere cluster uses two vSphere Distributed Switches that are configured as shown in the following tables.

vSphere Distributed Switch Name | Function | Network I/O Control | Load Balancing Mode | Physical NIC Ports | MTU
SDDC-Dswitch-Private | ESXi management, vSAN, vSphere vMotion, VXLAN tunnel endpoint (VTEP), NFS | Enabled | Route based on explicit failover (vSAN, vMotion); route based on originating virtual port (all other traffic) | 2 | 9,000 (jumbo frames)
SDDC-Dswitch-Public | External management traffic (north-south) | Enabled | Route based on originating virtual port | 2 | 1,500 (default)

The names, number, and ordering of the host NICs might vary depending on the IBM Cloud data center and your host hardware selection.
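
As an illustration of the switch settings in the previous table and the teaming parameters that follow, here is a minimal pyVmomi sketch that creates the private vDS with jumbo frames, two uplinks, and Network I/O Control enabled. It reuses the `si` connection from the earlier sketch; the datacenter lookup and uplink names are assumptions.

```python
# Sketch only: create SDDC-Dswitch-Private with MTU 9000, two uplinks, and
# Network I/O Control enabled. Reuses the `si` connection from the earlier
# sketch; the first datacenter is assumed to be the target.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]            # assumption: first datacenter
network_folder = datacenter.networkFolder

config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
config.name = "SDDC-Dswitch-Private"
config.maxMtu = 9000                                      # jumbo frames for vSAN, vMotion, VXLAN, NFS
config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["Uplink1", "Uplink2"])                # two physical NIC ports

create_spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=config)
task = network_folder.CreateDVS_Task(create_spec)
WaitForTask(task)
dvs = task.info.result                                    # the new distributed switch

dvs.EnableNetworkResourceManagement(enable=True)          # Network I/O Control: Enabled
```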

Parameter | Setting
Load balancing | Route based on the originating virtual port
Failover detection | Link status only
Notify switches | Enabled
Failback | No
Failover order | Active uplinks: Uplink1, Uplink2

Port Group | Teaming | Uplinks | VLAN ID
SDDC-DPortGroup-Mgmt | Originating virtual port | Active: 0, 1 | VLAN 1
SDDC-DPortGroup-vMotion | Use explicit failover order | Active: 0, 1 | VLAN 2
SDDC-DPortGroup-VSAN | Route based on originating virtual port | Active: 0, Standby: 1 | VLAN 2
SDDC-DPortGroup-NFS | Originating virtual port | Active: 0, 1 | VLAN 2
NSX generated | Originating virtual port | Active: 0, 1 | VLAN 1
SDDC-DPortGroup-External | Originating virtual port | Active: 0, 1 | VLAN 3
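
The port group teaming values can be applied with the same pyVmomi session. The sketch below creates the vMotion port group from the previous table (explicit failover order, both uplinks active, notify switches enabled, failback disabled, VLAN 2); the uplink names and port count are assumptions.

```python
# Sketch only: create SDDC-DPortGroup-vMotion with the teaming values from the
# table above. Assumes `dvs` is the private vDS object from the previous sketch
# and that the uplink names match your switch.
from pyVim.task import WaitForTask
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="failover_explicit"),      # explicit failover order
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False, activeUplinkPort=["Uplink1", "Uplink2"]),            # Active: 0, 1
    notifySwitches=vim.BoolPolicy(inherited=False, value=True),               # Notify switches: Enabled
    rollingOrder=vim.BoolPolicy(inherited=False, value=True),                 # Failback: No
)

port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(inherited=False, vlanId=2),
    uplinkTeamingPolicy=teaming,
)

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="SDDC-DPortGroup-vMotion",
    type="earlyBinding",            # static port binding
    numPorts=16,                    # assumption; size to your host count
    defaultPortConfig=port_config,
)
WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
```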

Purpose | Connected port group | Enabled services | MTU
Management | SDDC-DPortGroup-Mgmt | Management Traffic | 1,500 (default)
vMotion | SDDC-DPortGroup-vMotion | vMotion Traffic | 9,000
VTEP | NSX generated | - | 9,000
vSAN | SDDC-DPortGroup-VSAN | vSAN | 9,000
NAS | SDDC-DPortGroup-NFS | NAS | 9,000
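
To match the previous table, each host gets a VMkernel adapter on these port groups with the listed MTUs. The following is a minimal sketch for one vMotion adapter, reusing `find_by_name` and `dvs` from the earlier sketches; the host name, IP address, and netmask are placeholders.

```python
# Sketch only: add a vMotion VMkernel adapter on SDDC-DPortGroup-vMotion with
# MTU 9000 for one host. Host name, IP address, and netmask are placeholders;
# use an address from the Private B portable subnet.
from pyVmomi import vim

host = find_by_name(vim.HostSystem, "host001.example.com")                  # placeholder
portgroup = next(pg for pg in dvs.portgroup if pg.name == "SDDC-DPortGroup-vMotion")

vnic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="10.100.10.21",              # placeholder address
                         subnetMask="255.255.255.192"),
    mtu=9000,
    distributedVirtualPort=vim.dvs.PortConnection(portgroupKey=portgroup.key,
                                                  switchUuid=dvs.uuid),
)

network_system = host.configManager.networkSystem
vmk_device = network_system.AddVirtualNic("", vnic_spec)                    # returns, for example, "vmk2"

# Tag the new adapter for vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk_device)
```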


NSX configuration

This design specifies the configuration of NSX components but does not apply any network overlay component configuration. You can design the network overlay based on your needs.

The following aspects are preconfigured: the NSX Manager appliance, the three NSX controllers, host preparation (NSX VIBs) and VTEP configuration, the segment ID pool and transport zone, and the NSX Edge Services Gateways (when the instance has public network interfaces).

The following aspects are not configured: network overlay components such as logical switches, Distributed Logical Routers (DLRs), and distributed firewall rules; you design these based on your workload needs.

Figure 3. Deployed example customer NSX topology


Public network connectivity

You might need public network connectivity for the instance for various reasons, including access to public update services or to other public services for your workload, such as geolocation databases or weather data. Your virtualization management and add-on services might also require or benefit from public connectivity. For example, vCenter can update its HCL database and obtain VMware Update Manager (VUM) updates over the public network. Zerto, Veeam, VMware HCX, F5 BIG-IP, and FortiGate-VM all use public network connectivity for some part of their product licensing, activation, or usage reporting. In addition, you might use tunnels over the public network for connectivity to your on-premises data center for replication purposes.

Typically, these communications are selectively routed and NATed to the public network through the management or customer edge services gateway (ESG). However, you might have additional security requirements, or you might prefer to use a proxy to simplify the communication path. Additionally, if you deployed the instance with public interfaces disabled, you cannot use ESGs to route to the public network.

This architecture allows for the following options for routing or proxying your traffic to the public network:

Method | Description | Limitations
Virtualized gateway | Deploy a virtualized gateway (for example, NSX ESG, F5 BIG-IP, FortiGate-VM, or a virtual appliance of your choosing) crossing the private and public network. Configure routing on the source system (for example, vCenter, Zerto, your workload) to direct only public network traffic to the gateway, and configure the gateway according to your needs. | Applicable only to instances with public interfaces enabled. This configuration allows for both outbound and inbound traffic patterns.
Virtualized gateway with proxy | Deploy a virtualized gateway as above. Behind this gateway, deploy a proxy server, and configure your services and applications to connect to the public network through this proxy. | Applicable only to instances with public interfaces enabled. Outbound traffic patterns can use the proxy, but inbound traffic patterns must be managed at the gateway.
Hardware gateway | Deploy a hardware gateway appliance on your management VLAN. Configure the gateway to NAT outbound traffic to the public network according to your needs. | Applicable to all instances, with or without public interfaces enabled. This configuration allows for both outbound and inbound traffic patterns.
Hardware gateway with proxy | Deploy a gateway appliance as above. Behind this gateway, deploy a proxy server, and configure your services and applications to connect to the public network through this proxy. | Applicable to all instances, with or without public interfaces enabled. Outbound traffic patterns can use the proxy, but inbound traffic patterns must be managed by the gateway.
Load balancer | IBM Cloud offers several load balancer services that you can use to provide inbound network access to your applications. | Applicable to all instances, but limited to inbound traffic patterns.
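
For example, with the virtualized gateway option on a public-enabled instance, outbound traffic is typically handled by a source NAT rule on the customer-managed ESG. The following sketch appends one SNAT rule through the NSX-V Edge NAT API; the edge ID, uplink vNIC index, workload subnet, and public IP address are placeholders.

```python
# Sketch only: append a source NAT rule to the customer-managed ESG so that a
# private workload subnet can reach the public network. Edge ID, addresses, and
# the uplink vNIC index are placeholders; adjust to your edge configuration.
import requests

NSX = "https://nsxmgr.example.com"
AUTH = ("admin", "********")
EDGE_ID = "edge-2"   # customer-managed edge (placeholder)

snat_rule = """
<natRules>
  <natRule>
    <action>snat</action>
    <vnic>0</vnic>
    <originalAddress>192.168.10.0/24</originalAddress>
    <translatedAddress>203.0.113.10</translatedAddress>
    <enabled>true</enabled>
    <description>Outbound access for workload subnet (placeholder addresses)</description>
  </natRule>
</natRules>"""

resp = requests.post(f"{NSX}/api/4.0/edges/{EDGE_ID}/nat/config/rules",
                     data=snat_rule, headers={"Content-Type": "application/xml"},
                     auth=AUTH, verify=False)
resp.raise_for_status()
```

For the proxy variants, only the proxy server itself needs such a rule or route through the gateway; the remaining systems point at the proxy instead.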
