VMware NSX-T design

Unlike NSX-V (NSX on vSphere), VMware NSX-T is designed to address application frameworks and architectures that have heterogeneous endpoints and technology stacks. In addition to vSphere, these environments can include other hypervisors such as KVM, containers, and bare metal servers. NSX-T spans a software-defined network and security infrastructure across platforms beyond vSphere alone. While it is possible to deploy NSX-T components without vSphere, this design focuses on NSX-T and its integration primarily within a vCenter Server vSphere automated deployment.

There are many advanced features within NSX-T, such as firewall policies, inclusion of guest introspection within firewall policies, and advanced NetFlow tracking. Describing these features is beyond the scope of this document; see the VMware documentation for NSX-T. In this design, the NSX-T management infrastructure is deployed during the initial vCenter Server cluster deployment in place of NSX-V.


NSX-T vs NSX-V

For users familiar with vSphere networking and NSX (NSX-V), review the following commonly used NSX-T objects and their approximate NSX-V or vSphere native counterparts. Limitations and differences within a vSphere environment are also discussed. The following table lists typically used functions that correspond between NSX-V and NSX-T.

NSX-V or vSphere native | NSX-T
vSphere Distributed Switch (vDS) | NSX Virtual Distributed Switch (N-VDS)
Transport zone | Transport zone (overlay or VLAN-backed)
Logical switch | Segments
VXLAN (L2 encapsulation) | GENEVE (L2 encapsulation)
Edge Services Gateway (ESG) | Tier-0 (T0) Gateway
Distributed Logical Router (DLR) | Tier-1 (T1) Gateway
ESXi host | Transport node (ESXi, KVM, bare metal T0 Gateway)

NSX-T has Tier-0 (T0) gateways and Tier-1 (T1) gateways. Although the previous table shows the NSX-V Edge Services Gateway and Distributed Logical Router as equivalent to the T0 gateway and T1 gateway respectively, that equivalence is not exact.

For NSX-T, there are two new concepts: Distributed Router (DR) and Service Router (SR).

Router type | Capabilities
Distributed Router (DR) | Provides basic packet forwarding and distributed east-west routing functions. Spans all transport nodes. Runs as a kernel module on ESXi.
Service Router (SR) | Provides gateway services: NAT, load balancer, gateway firewall, north-south routing, VPN, DNS forwarding, and DHCP.
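
To make the DR/SR split concrete, here is a minimal Python sketch of the concept. The class, field, and service names are illustrative placeholders, not NSX-T API objects; it only models the rule that every gateway has a distributed component, while a service router is instantiated only when centralized services (or north-south routing on a T0) are configured.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of an NSX-T gateway; illustrative only, not the NSX-T API.
CENTRALIZED_SERVICES = {"NAT", "LOAD_BALANCER", "GATEWAY_FIREWALL",
                        "VPN", "DNS_FORWARDER", "DHCP"}

@dataclass
class Gateway:
    name: str
    tier: int                               # 0 or 1
    services: List[str] = field(default_factory=list)

    @property
    def has_distributed_router(self) -> bool:
        # Every T0/T1 gateway has a DR component that spans all transport nodes.
        return True

    @property
    def needs_service_router(self) -> bool:
        # An SR is instantiated on the edge cluster only when centralized
        # services (or north-south routing on a T0) are configured.
        return self.tier == 0 or any(s in CENTRALIZED_SERVICES for s in self.services)

print(Gateway("t1-workload", tier=1).needs_service_router)                        # False: DR only
print(Gateway("t1-workload-nat", tier=1, services=["NAT"]).needs_service_router)  # True: DR + SR
```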

Several key NSX-T concepts have no direct NSX-V counterpart and need to be understood for this design's implementation of NSX-T.


Resource requirements

In this design, the NSX-T Manager VMs are deployed on the management cluster. Each manager is assigned a VLAN-backed IP address from the private portable address block. This address block is designated for management components and is configured with the DNS and NTP servers that are discussed in section 0. A summary of the NSX Manager installation is shown in the following table.

The VMware Identity Manager appliance must be deployed manually by the customer if basic RBAC and Active Directory integration is required. It can also provide multi-factor authentication (MFA), conditional access, and single sign-on (SSO) services. For more information, see VMware Identity Manager.

Attribute | Specification
Number of appliances | Three virtual appliances
vCPU per appliance | 6
Memory per appliance | 24 GB
Disk per appliance | 200 GB
Disk type | Thin provisioned
Network | Private A

The following figure shows the placement of the NSX managers in relation to the other components in this architecture.

Figure 1. NSX-T Manager network overview


Deployment considerations

With NSX-T on vSphere, the N-VDS must be assigned physical adapters within the hosts, and an N-VDS can be configured only within NSX-T Manager. Therefore, to maintain redundancy, no physical adapters remain available for native local switch or vDS assignment in a cluster that houses both the NSX-T components and the associated overlay network components.

After initial deployment, the IBM Cloud automation deploys three NSX-T Manager virtual appliances within the management cluster. The managers are assigned VLAN-backed IP addresses from the Private A portable subnet that is designated for management components. Additionally, VM-VM anti-affinity rules are created so that the managers are separated across the hosts in the cluster.

You must deploy the management cluster with a minimum of three nodes to ensure high availability for the NSX-T Manager cluster. In addition to the managers, the IBM Cloud automation prepares the deployed workload cluster hosts as NSX-T transport nodes. The ESXi transport nodes are assigned VLAN-backed IP addresses from the Private A portable IP address range, which is specified by an NSX-T IP pool range derived from the VLAN and subnet summary. Transport node traffic resides on the untagged VLAN and is assigned to the private NSX-T virtual distributed switch (N-VDS).
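
As an illustration of how a TEP pool range can be carved from a portable subnet, the following sketch uses Python's ipaddress module. The subnet value and the number of addresses reserved for management are hypothetical placeholders, not values from an actual VLAN and subnet summary.

```python
import ipaddress

# Hypothetical Private A portable subnet; the real value comes from the
# VLAN and subnet summary of the deployed instance.
portable_subnet = ipaddress.ip_network("10.100.10.0/26")

hosts = list(portable_subnet.hosts())
reserved_for_management = 8          # illustrative: managers, other appliances

# The remaining addresses form a contiguous range for the ESXi TEP pool.
tep_start, tep_end = hosts[reserved_for_management], hosts[-1]
print(f"esxi-tep-pool: {tep_start} - {tep_end}, gateway {hosts[0]}")
```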

Depending on the customer-chosen NSX-T topology to be deployed, an NSX-T edge cluster is deployed either as a pair of VMs or as software deployed on bare metal cluster nodes. Regardless of whether the cluster pair is virtual or physical, uplinks are configured to N-VDS switches for both the IBM Cloud public and private networks.

The VMware Identity Manager appliance must be deployed manually by the customer. It provides Active Directory authentication, multi-factor authentication (MFA), conditional access, and single sign-on (SSO) services.

The following table summarizes the requirements for a medium size environment, which is the recommended starting size for production workloads.

Resources | Manager x3 | Edge cluster x4 | Bare metal edge
Form factor | Virtual appliance | Virtual appliance | Physical server
vCPU or CPU cores | 6 | 4 | 8
Memory | 24 GB | 8 GB | 8 GB
Disk | 200 GB vSAN or management NFS | 200 GB vSAN or management NFS | 200 GB
Disk type | Thin provisioned | Thin provisioned | Physical
Network | Private A | Private A | Private A
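
A quick arithmetic check of the aggregate footprint for this starting size, using the per-node figures from the table (three managers plus a four-node virtual edge cluster), can be sketched as follows.

```python
# Aggregate footprint of the medium starting size, per the table above.
managers = {"count": 3, "vcpu": 6, "ram_gb": 24, "disk_gb": 200}
edges    = {"count": 4, "vcpu": 4, "ram_gb": 8,  "disk_gb": 200}

total_vcpu = sum(r["count"] * r["vcpu"] for r in (managers, edges))
total_ram  = sum(r["count"] * r["ram_gb"] for r in (managers, edges))
total_disk = sum(r["count"] * r["disk_gb"] for r in (managers, edges))

print(f"{total_vcpu} vCPU, {total_ram} GB RAM, {total_disk} GB thin-provisioned disk")
# 34 vCPU, 104 GB RAM, 1400 GB
```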


Distributed switch design

The design uses a minimum number of distributed switches. The hosts in the management cluster are connected to the public and private networks and are configured with two distributed virtual switches (vDS). The use of two switches follows the IBM Cloud network practice of separating the public and private networks. The hosts in the workload cluster are also connected to the public and private networks and are configured with two NSX-T virtual distributed switches (N-VDS). The following diagram shows the vDS and N-VDS design.

Figure 2. NSX-T Distributed switch design

VLANs are used to segment physical network functions. This design uses three VLANs: two for private network traffic and one for public network traffic. The following table shows the traffic separation.

VLAN | Designation | Traffic type
VLAN 1 | Private A | ESXi management, management, ESXi TEP
VLAN 2 | Private B | vSAN, NFS, NSX-T Edge TEP, and vMotion
VLAN 3 | Public | Available for internet access
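
The VLAN plan can also be captured as a simple lookup structure for validation scripts. The following sketch only restates the table above; the check that ESXi TEP and NSX-T Edge TEP traffic sit on different private VLANs reflects the separation shown in the table.

```python
# VLAN plan from the table above; keys are designations, not real VLAN IDs.
vlan_plan = {
    "Private A": {"ESXi management", "management", "ESXi TEP"},
    "Private B": {"vSAN", "NFS", "NSX-T Edge TEP", "vMotion"},
    "Public":    {"internet access"},
}

# Per the table, host TEPs and edge TEPs are placed on different private VLANs.
assert "ESXi TEP" not in vlan_plan["Private B"]
assert "NSX-T Edge TEP" not in vlan_plan["Private A"]
print("ESXi TEPs (Private A) and Edge TEPs (Private B) use separate private VLANs")
```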


Naming conventions

The following naming conventions are used for deployment. For readability, only the distinguishing part of the name is used. For example, 'instancename'-'dcname'-'clustername'-tz-edge-private is referred to as tz-edge-private.

Description | Naming standard
NSX-T Manager 0 | 'instancename'-nsxt-ctrlmgr0
NSX-T Manager 1 | 'instancename'-nsxt-ctrlmgr1
NSX-T Manager 2 | 'instancename'-nsxt-ctrlmgr2
ESXi private uplink profile | 'instancename'-esxi-private-profile
ESXi public uplink profile | 'instancename'-esxi-public-profile
Edge private uplink profile | 'instancename'-edge-private-profile
Edge public uplink profile | 'instancename'-edge-public-profile
Edge TEP uplink profile | 'instancename'-edge-tep-profile
NIOC vSAN private profile | 'instancename'-'clustername'-nioc-vsan-private-profile
NIOC iSCSI private profile | 'instancename'-'clustername'-nioc-iscsi-private-profile
NIOC NFS private profile | 'instancename'-'clustername'-nioc-nfs-private-profile
NIOC public profile | 'instancename'-'clustername'-nioc-public-profile
Edge cluster profile | 'instancename'-'dcname'-'clustername'-edge-cluster-profile
ESXi private transport zone | 'instancename'-tz-esxi-private
ESXi public transport zone | 'instancename'-tz-esxi-public
VM overlay transport zone | 'instancename'-tz-vm-overlay
Edge private transport zone | 'instancename'-tz-edge-private
Edge public transport zone | 'instancename'-tz-edge-public
Private N-VDS | 'instancename'-nvds-private
Public N-VDS | 'instancename'-nvds-public
Edge private N-VDS | 'instancename'-nvds-edge-private
Edge public N-VDS | 'instancename'-nvds-edge-public
Management segment | 'instancename'-mgmt
NFS segment | 'instancename'-'dcname'-nfs
vSAN segment | 'instancename'-'dcname'-vsan
vMotion segment | 'instancename'-'dcname'-vmotion
iSCSI-A segment | 'instancename'-'dcname'-iscsi-a
iSCSI-B segment | 'instancename'-'dcname'-iscsi-b
Edge management segment | 'instancename'-'dcname'-edge-mgmt
Edge private trunk segment | 'instancename'-edge-private-trunk
Edge public trunk segment | 'instancename'-edge-public-trunk
Edge TEP trunk segment | 'instancename'-'dcname'-edge-tep-trunk
Customer T0 private segment | 'instancename'-'dcname'-customer-t0-private
Customer T0 public segment | 'instancename'-'dcname'-customer-t0-public
Customer workload segment | 'instancename'-customer-workload
ESXi TEP IP pool | 'instancename'-'dcname'-'clustername'-'primary-vlan-id'-esxi-tep-pool
Edge TEP IP pool | 'instancename'-'dcname'-'clustername'-'secondary-vlan-id'-edge-tep-pool
ESXi transport node profile | 'instancename'-'dcname'-'clustername'-esxi-tpn-profile
T0 gateway | 'instancename'-'dcname'-'clustername'-T0-xxx (specific to the function, such as workload, OpenShift, or HCX)
T1 gateway | 'instancename'-'dcname'-'clustername'-T1-xxx


Transport zones and N-VDS

Transport zones dictate which hosts, and therefore which VMs, can participate in the use of a particular network. A transport zone limits the hosts that can "see" a logical switch and, therefore, which VMs can be attached to it. A transport zone can span one or more host clusters. This design calls for the transport zones listed in the following table:

Transport zone name | VLAN or Geneve | N-VDS name | Uplink teaming policy
tz-vm-overlay | Geneve | nvds-private | Default
tz-edge-public | VLAN | nvds-edge-public | Default
tz-esxi-public | VLAN | nvds-public | Default
tz-esxi-private | VLAN | nvds-private | Default, NFS, vSAN, iSCSI-A&B
tz-edge-private | VLAN | nvds-edge-private | Default


Transport nodes

Transport nodes define the physical server objects or VMs that participate in the virtual network fabric. Review the following table to understand the design.

Transport node type | N-VDS names | Uplink profile | IP assignment
ESXi | nvds-private, nvds-public | esxi-private-profile, esxi-public-profile | esxi-tep-pool
Edge | nvds-edge-private, nvds-edge-public, nvds-private | edge-private-profile, edge-public-profile, edge-tep-profile | edge-tep-pool
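
The transport zone and transport node tables can be cross-checked programmatically. The following sketch, which assumes the short-form names used above, verifies that every N-VDS referenced by a transport zone is installed on at least one transport node type.

```python
# Short-form names from the transport zone and transport node tables above.
transport_zones = {
    "tz-vm-overlay":   "nvds-private",
    "tz-esxi-private": "nvds-private",
    "tz-esxi-public":  "nvds-public",
    "tz-edge-private": "nvds-edge-private",
    "tz-edge-public":  "nvds-edge-public",
}

transport_nodes = {
    "esxi": {"nvds-private", "nvds-public"},
    "edge": {"nvds-edge-private", "nvds-edge-public", "nvds-private"},
}

installed = set().union(*transport_nodes.values())
missing = {tz for tz, nvds in transport_zones.items() if nvds not in installed}
print("transport zones without a backing N-VDS:", missing or "none")
```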


Uplink profiles and teaming

An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches or from NSX Edge nodes to top-of-rack switches.

Uplink profile name | Teaming policy name | Teaming policy | Active uplinks | Standby uplinks | VLAN | MTU
esxi-private-profile | Default | Failover order | uplink-1 | uplink-2 | default | 9000
esxi-public-profile | Default | Load balance source | uplink-1, uplink-2 | None | default | 1500
edge-private-profile | Default | Load balance source | uplink-1, uplink-2 | None | default | 9000
esxi-private-profile | mgmt | Failover order | uplink-1 | uplink-2 | default | 9000
esxi-private-profile | vsan | Failover order | uplink-2 | uplink-1 | default | 9000
esxi-private-profile | nfs | Failover order | uplink-2 | uplink-1 | default | 9000
esxi-private-profile | vmotion | Failover order | uplink-1 | uplink-2 | default | 9000
esxi-private-profile | iscsi-a | Failover order | uplink-1 | None | default | 9000
esxi-private-profile | iscsi-b | Failover order | uplink-2 | None | default | 9000
edge-public-profile | Default | Load balance source | uplink-1, uplink-2 | None | default | 1500
edge-tep-profile | Default | Failover order | uplink-1 | None | Storage VLAN | 9000
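
For reference, an uplink profile such as esxi-private-profile might be expressed as a payload resembling the following. This is a sketch only; the field names follow the general NSX-T API style (for example, UplinkHostSwitchProfile and named_teamings) and should be verified against the API reference for the NSX-T version in use.

```python
import json

# Approximate shape of an uplink profile payload for the esxi-private-profile
# row above; field names are assumptions to verify against the API reference.
esxi_private_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-private-profile",
    "mtu": 9000,
    "transport_vlan": 0,            # default / untagged VLAN
    "teaming": {                    # default teaming policy: failover order
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "named_teamings": [             # example: pin vSAN traffic to uplink-2
        {
            "name": "vsan",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
            "standby_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        }
    ],
}
print(json.dumps(esxi_private_profile, indent=2))
```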


VNI pools

Virtual Network Identifiers (VNIs) are similar to VLANs in a physical network. They are automatically assigned from a pool or range of IDs when a logical switch is created. This design uses the default VNI pool that is deployed with NSX-T.


Segments

An NSX-T segment reproduces switching functions, including the handling of broadcast, unknown unicast, and multicast (BUM) traffic, in a virtual environment that is decoupled from the underlying hardware.

Segment name | VLAN | Transport zone | Uplink teaming policy
Mgmt | Default | tz-esxi-private | mgmt
NFS | Tagged storage VLAN | tz-esxi-private | NFS
vMotion | Tagged storage VLAN | tz-esxi-private | vMotion
vSAN | Tagged storage VLAN | tz-esxi-private | vSAN
iSCSI-A | Tagged storage VLAN | tz-esxi-private | iSCSI-A
iSCSI-B | Tagged storage VLAN | tz-esxi-private | iSCSI-B
edge-mgmt | Tagged storage VLAN | tz-esxi-private | Default failover order, uplink-1
edge-private-trunk | 0-4094 | tz-esxi-private | Default failover order, uplink-1
edge-public-trunk | 0-4094 | tz-esxi-public | Default loadbalance source
edge-tep | Tagged storage VLAN | tz-esxi-private | TEP
T0-public | Default | tz-edge-public | -
T0-private | Default | tz-edge-private | -
customer-workload | - | tz-vm-overlay | -
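
As an example of how a segment from this table could be created programmatically, the following sketch calls the NSX-T Policy API over REST. The manager address, credentials, transport zone UUID, T1 path, and subnet are placeholders, and the exact URL and payload fields should be confirmed against the NSX-T API reference for the deployed version.

```python
import requests

# Minimal sketch of creating the customer-workload overlay segment.
# Host, credentials, IDs, and paths are placeholders; verify against the
# NSX-T Policy API reference for the version in use.
NSX = "https://nsx-manager.example.com"          # hypothetical manager address
AUTH = ("admin", "REPLACE_ME")
segment_id = "customer-workload"

payload = {
    "display_name": segment_id,
    # Path of the overlay transport zone (tz-vm-overlay); UUID is a placeholder.
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/<tz-vm-overlay-uuid>",
    # Attach the segment to a T1 gateway and define its gateway address.
    "connectivity_path": "/infra/tier-1s/customer-t1",
    "subnets": [{"gateway_address": "192.168.100.1/24"}],
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/segments/{segment_id}",
    json=payload, auth=AUTH, verify=False,       # lab-style TLS handling
)
resp.raise_for_status()
print("segment created or updated:", segment_id)
```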


Edge cluster

Within this design, two virtual edge node clusters are provisioned: one for use by management and the other for customer workloads. There is a limitation of one T0 gateway per edge transport node, which means that a single edge node cluster can support one T0 gateway (in either active-standby or active-active mode). The following figure diagrams the functional components of an NSX-T edge services cluster.

Figure 3. NSX-T Edge cluster example of T0 to T1 scale

Figure 4. Management T0 gateway

Tier 0 logical gateway

An NSX-T Tier-0 (T0) logical gateway provides an on and off ramp gateway service between the logical and physical networks. For this design, multiple T0 gateways are deployed for the needs of management, add-on products, and, optionally, customer-chosen topologies.

Tier 1 logical gateway

An NSX-T Tier-1 (T1) logical gateway has downlink ports to connect to NSX-T Data Center logical switches and uplink ports to connect only to NSX-T Data Center Tier-0 logical gateways. It runs at the kernel level of the hypervisor it is configured on, not as a virtual or physical machine. For this design, one or more T1 logical gateways are created for the needs of customer-chosen topologies. A T1 logical gateway is not always needed, because segments can be attached directly to a T0 gateway.

Tier 1 to Tier 0 route advertisement

To provide Layer 3 connectivity between VMs that are connected to logical switches attached to different Tier-1 logical gateways, you must enable Tier-1 route advertisement towards Tier-0. You do not need to configure a routing protocol or static routes between Tier-1 and Tier-0 logical gateways; NSX-T creates static routes automatically when you enable route advertisement. For this design, route advertisement is always enabled on any T1 gateways that are created by the IBM Cloud for VMware Solutions automation.
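
A minimal sketch of enabling route advertisement on a T1 gateway through the NSX-T Policy API follows. The manager address, credentials, and gateway IDs are placeholders, and the advertisement type names should be confirmed against the API reference for the NSX-T version in use.

```python
import requests

# Sketch only: attach a T1 gateway to a T0 and enable route advertisement.
NSX = "https://nsx-manager.example.com"          # hypothetical manager address
AUTH = ("admin", "REPLACE_ME")

t1_payload = {
    "display_name": "customer-t1",
    "tier0_path": "/infra/tier-0s/customer-t0",  # uplink to the T0 gateway
    "route_advertisement_types": [
        "TIER1_CONNECTED",   # advertise connected segment subnets
        "TIER1_NAT",         # advertise NAT rule addresses, if NAT is used
    ],
}

resp = requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/customer-t1",
                      json=t1_payload, auth=AUTH, verify=False)
resp.raise_for_status()
print("T1 route advertisement enabled")
```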


Preconfigured topologies

Workload to T1 to T0 gateway – virtual edge services cluster

Figure 5. NSX-T deployed topology virtual T0 Edge Gateway

Topology 1 is essentially the same topology that is deployed with NSX-V DLR and Edge gateways, but with NSX-T there is no dynamic routing protocol configuration between the T1 and T0 gateways. RFC 1918 private IP address space is used for the workload overlay network and the transit overlay network. A customer-designated IBM Cloud private and public portable IP space is assigned to the T0 gateway for customer use.
