NSX-T Architecture
NSX-T has a built-in separation of the Management Plane (NSX-T Manager), Control Plane (NSX-T Controllers), and Data Plane (hypervisors, containers, etc.). I highly recommend going through the NSX-T whitepaper for detailed information on the architecture and the components and functionality of each plane.
A couple of interesting points I want to highlight about the architecture:
- NSX-T Manager is decoupled from vCenter and is designed to work across these heterogeneous compute platforms.
- NSX-T Controllers serve as the central control point for all logical switches within a network and maintain information about hosts, logical switches, and logical routers.
- NSX-T Manager and NSX-T Controllers can be deployed in a VM form factor on either ESXi or KVM.
- In order to provide networking to different types of compute nodes, NSX-T relies on a virtual switch called the “hostswitch”. The NSX-T management plane fully manages the lifecycle of this “hostswitch”, which is implemented as a variant of the VMware virtual switch on ESXi-based endpoints and as Open vSwitch (OVS) on KVM-based endpoints.
- The Data Plane stretches across a variety of compute nodes: ESXi, KVM, containers, and NSX-T Edge nodes (the on/off ramp to the physical infrastructure).
- Each compute node is a transport node and hosts a TEP (Tunnel End Point). Depending on the teaming policy, a host could have one or more TEPs.
- NSX-T uses GENEVE as the underlying overlay protocol for these TEPs to carry Layer 2 information across Layer 3. GENEVE carries a 24-bit VNI (Virtual Network Identifier) in its base header and provides complete flexibility to insert metadata as TLV (Type, Length, Value) options, which can be used for new features. An MTU of at least 1600 is recommended to account for the encapsulation headers. More details on GENEVE can be found in the following IETF draft: https://datatracker.ietf.org/doc/draft-ietf-nvo3-geneve/
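To make the encapsulation concrete, here is a minimal Python sketch of the fixed 8-byte GENEVE base header as described in the IETF draft. This is purely illustrative (not NSX-T code), the function names are mine, and TLV options would simply follow the packed base header.

```python
import struct

TRANS_ETHER_BRIDGING = 0x6558  # protocol type for an inner Ethernet frame

def build_geneve_base_header(vni, opt_len_words=0, oam=False, critical=False,
                             protocol=TRANS_ETHER_BRIDGING):
    """Pack the fixed 8-byte GENEVE base header (TLV options not included)."""
    byte0 = (0 << 6) | (opt_len_words & 0x3F)        # Ver=0, Opt Len in 4-byte words
    byte1 = (int(oam) << 7) | (int(critical) << 6)   # O and C flags, reserved bits zero
    vni_and_reserved = (vni & 0xFFFFFF) << 8         # 24-bit VNI + 8 reserved bits
    return struct.pack("!BBHI", byte0, byte1, protocol, vni_and_reserved)

def parse_geneve_base_header(data):
    """Return (vni, option_length_in_bytes, protocol) from a base header."""
    byte0, _flags, protocol, vni_and_reserved = struct.unpack("!BBHI", data[:8])
    return vni_and_reserved >> 8, (byte0 & 0x3F) * 4, protocol

header = build_geneve_base_header(vni=5001)
print(parse_geneve_base_header(header))  # VNI 5001, no options, protocol 0x6558
```

The outer Ethernet/IP/UDP headers plus this GENEVE header are what push the required underlay MTU above the default 1500 bytes, hence the 1600-byte recommendation.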
Before we dive deep into routing, let me define a few key terms.
Logical Switch is a broadcast domain that can span multiple compute hypervisors. VMs in the same subnet connect to the same logical switch.
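As an illustration of how a logical switch might be created programmatically, here is a hedged Python sketch against the NSX-T Manager REST API. The manager address, credentials, and transport zone UUID are placeholders, and the endpoint and payload fields are from my recollection of the Manager API, so verify them against the API guide for your NSX-T version.

```python
import requests

# Placeholder values -- substitute your NSX-T Manager, credentials, and the
# UUID of an overlay transport zone.
NSX_MANAGER = "https://nsx-mgr.example.com"
AUTH = ("admin", "<password>")
OVERLAY_TZ_ID = "<overlay-transport-zone-uuid>"

payload = {
    "display_name": "LS-Web",            # logical switch for the web-tier subnet
    "transport_zone_id": OVERLAY_TZ_ID,
    "admin_state": "UP",
    "replication_mode": "MTEP",          # two-tier replication for BUM traffic
}

# Create the logical switch; NSX-T assigns a VNI from the transport zone's pool.
resp = requests.post(f"{NSX_MANAGER}/api/v1/logical-switches",
                     json=payload, auth=AUTH, verify=False)  # verify=False for lab use only
resp.raise_for_status()
switch = resp.json()
print(switch["id"], switch.get("vni"))
```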
Logical Router provides North-South and East-West routing between different subnets and has two components: a distributed component (Distributed Router, DR) that runs as a kernel module in the hypervisor, and a centralized component (Service Router, SR) that takes care of centralized functions like NAT, DHCP, and load balancing and provides connectivity to the physical infrastructure.
Types of interfaces on a Logical Router
- Downlink: Interface connecting to a logical switch.
- Uplink: Interface connecting to the physical infrastructure/physical router.
- RouterLink: Interface connecting two logical routers.
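To keep these three interface types straight, here is a small, purely illustrative Python model (the class and attribute names are hypothetical, not NSX-T API objects) showing a logical router with a downlink to a logical switch, an uplink toward a physical router, and a RouterLink pair connecting two logical routers.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class PortType(Enum):
    DOWNLINK = "downlink"      # connects the router to a logical switch
    UPLINK = "uplink"          # connects the router to the physical infrastructure
    ROUTERLINK = "routerlink"  # connects two logical routers together

@dataclass
class RouterPort:
    port_type: PortType
    connected_to: str                 # logical switch, physical router, or peer logical router
    ip_address: Optional[str] = None

@dataclass
class LogicalRouter:
    name: str
    ports: List[RouterPort] = field(default_factory=list)

def connect_routers(a: LogicalRouter, b: LogicalRouter) -> None:
    """Create a RouterLink port on each router pointing at the other."""
    a.ports.append(RouterPort(PortType.ROUTERLINK, connected_to=b.name))
    b.ports.append(RouterPort(PortType.ROUTERLINK, connected_to=a.name))

# Hypothetical example: one router serves a web-tier switch, the other uplinks
# to the physical router; a RouterLink ties the two together.
lr_tenant = LogicalRouter("LR-Tenant",
                          [RouterPort(PortType.DOWNLINK, "LS-Web", "172.16.10.1/24")])
lr_provider = LogicalRouter("LR-Provider",
                            [RouterPort(PortType.UPLINK, "Physical-Router", "192.168.240.3/24")])
connect_routers(lr_tenant, lr_provider)
```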
Edge nodes are appliances with a pool of capacity to run centralized services and act as an on/off ramp to the physical infrastructure. You can think of an edge node as an empty container that hosts one or more logical routers to provide centralized services and connectivity to physical routers. An edge node is a transport node just like a compute node and also has a TEP IP to terminate overlay tunnels.
They are available in two form factors: bare metal or VM (leveraging Intel’s DPDK technology).
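Since an edge node registers as a transport node just like a hypervisor, a quick way to see both side by side is to list transport nodes from the manager. The sketch below is hedged: the /api/v1/transport-nodes endpoint and response fields are from my recollection of the NSX-T Manager API, so check them against the API guide for your version.

```python
import requests

# Placeholder lab values; use your real manager address and credentials.
NSX_MANAGER = "https://nsx-mgr.example.com"
AUTH = ("admin", "<password>")

resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes",
                    auth=AUTH, verify=False)  # verify=False for lab use only
resp.raise_for_status()

# Hypervisor hosts and edge nodes both appear here, each with its own TEP(s).
for node in resp.json().get("results", []):
    print(node.get("id"), node.get("display_name"))
```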
Moving on, let’s also get familiar with the topology that I will use throughout this blog series.