NSX-T Manager Controller provides the graphical user interface (GUI) and the RESTful API for creating, configuring, and monitoring NSX-T components, such as logical switches. It implements the management plane for the NSX-T infrastructure, offers an aggregated system view, and is the centralized network management component of NSX-T. It also provides a way to monitor and troubleshoot workloads attached to virtual networks. NSX-T Manager Controller provides configuration and orchestration of the following services (see the API sketch after this list):
- Logical networking components, such as logical switching and routing
- Networking and edge services
- Security services and distributed firewall
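For illustration, a logical switch can be created through the RESTful API mentioned above. The sketch below is a minimal example under stated assumptions, not a definitive workflow: the manager address, credentials, and transport zone ID are placeholders, and it assumes the pre-policy `POST /api/v1/logical-switches` endpoint with HTTP basic authentication.

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!")                  # placeholder credentials

# Minimal request body; transport_zone_id must reference an existing
# overlay transport zone (the value here is a placeholder).
body = {
    "display_name": "ls-web-tier",
    "transport_zone_id": "<overlay-tz-uuid>",
    "admin_state": "UP",
    "replication_mode": "MTEP",
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/logical-switches",
    json=body,
    auth=AUTH,
    verify=False,  # lab only; validate certificates in production
)
resp.raise_for_status()
print("created logical switch:", resp.json()["id"])
```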
An NSX-T Manager Controller controls virtual networks and overlay transport tunnels. For stability and reliability of data transport, the NSX-T Manager Controller is deployed as a cluster of three highly available virtual appliances that are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture. The Central Control Plane (CCP) is logically separated from all data plane traffic, so a failure in the control plane does not affect existing data plane operations. The controller provides configuration to other NSX-T components, such as logical switches, logical routers, and edge virtual machines.
Logical Switch
A logical switch is a broadcast domain that can span multiple compute hypervisors. VMs in the same subnet connect to the same logical switch.
Edge Node
Edge nodes are appliances with a pool of capacity to run centralized services, and they act as an on/off ramp to the physical infrastructure. You can think of an Edge node as an empty container that hosts one or more logical routers to provide centralized services and connectivity to physical routers. An Edge node is a transport node, just like a compute node, and also has a TEP IP address to terminate overlay tunnels.
NSX-T Virtual Distributed Switch
An NSX-T Virtual Distributed Switch (N-VDS) runs on ESXi hosts and provides physical traffic forwarding. It transparently provides the underlying forwarding service that each logical switch relies on. To achieve network virtualization, a network controller must configure the ESXi host virtual switch with network flow tables that form the logical broadcast domains the tenant administrators define when they create and configure logical switches.
NSX-T implements each logical broadcast domain by tunneling VM-to-VM traffic and VM-to-logical router traffic using the Geneve tunnel encapsulation mechanism. The network controller has a global view of the data center and ensures that the ESXi host virtual switch flow tables are updated as VMs are created, moved, or removed.
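As a concrete illustration of the encapsulation step, the following sketch builds the 8-byte Geneve base header defined in RFC 8926 (version, option length, flags, protocol type, and 24-bit VNI). The VNI value and placeholder frame are illustrative only; a real TEP also wraps the result in UDP (destination port 6081) and outer IP headers.

```python
import struct

GENEVE_UDP_PORT = 6081    # IANA-assigned UDP port for Geneve
ETHERNET_PROTO = 0x6558   # Trans-Ethernet bridging: payload is an L2 frame

def geneve_header(vni: int, opt_len_words: int = 0) -> bytes:
    """Build a minimal 8-byte Geneve base header (no options)."""
    # Byte 0: Ver (2 bits, currently 0) | Opt Len (6 bits, in 4-byte words)
    # Byte 1: O flag | C flag | 6 reserved bits -- all zero here
    first = (0 << 6) | (opt_len_words & 0x3F)
    second = 0
    # The VNI occupies the top 24 bits of the final 32-bit word
    vni_word = (vni & 0xFFFFFF) << 8
    return struct.pack("!BBHI", first, second, ETHERNET_PROTO, vni_word)

# Encapsulation at the source TEP: prepend the header to the inner frame.
inner_frame = b"\x00" * 60                 # placeholder Ethernet frame
packet = geneve_header(vni=5001) + inner_frame
```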
Logical Routers
NSX-T logical routers provide North-South connectivity so that workloads can access external networks, and East-West connectivity between different logical networks.
A logical router is a configured partition of a traditional network hardware router. It replicates the functionality of the hardware, creating multiple routing domains in a single router. Logical routers perform a subset of the tasks that are handled by the physical router, and each can contain multiple routing instances and routing tables. Using logical routers can be an effective way to maximize router use, because a set of logical routers within a single physical router can perform the operations previously performed by several pieces of equipment.
Distributed router (DR)
A DR spans the ESXi hosts whose virtual machines are connected to the logical router, as well as the edge nodes the logical router is bound to. Functionally, the DR is responsible for one-hop distributed routing between the logical switches and logical routers connected to it.
One or more (optional) service routers (SR)
An SR is responsible for delivering services that are not currently implemented in a distributed fashion, such as stateful NAT.
A logical router always has a DR. It has SRs when it is a Tier-0 router, or when it is a Tier-1 router with centralized services such as NAT or DHCP configured.
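The rule above is small enough to state as code. This is a paraphrase of the text, not an NSX-T API; the function name and types are invented for illustration:

```python
def has_service_router(tier: int, services: set[str]) -> bool:
    """Every logical router has a DR; an SR exists for Tier-0 routers,
    or for Tier-1 routers with centralized services configured."""
    if tier == 0:
        return True
    return tier == 1 and bool(services)

assert has_service_router(0, set())        # Tier-0: always gets an SR
assert has_service_router(1, {"NAT"})      # Tier-1 with NAT: SR created
assert not has_service_router(1, set())    # plain Tier-1: DR only
```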
Tunnel Endpoint
Tunnel endpoints enable ESXi hosts to participate in an NSX-T overlay. The NSX-T overlay deploys a Layer 2 network on top of an existing Layer 3 network fabric by encapsulating frames inside packets and transferring the packets over an underlying transport network. The underlying transport network can be another Layer 2 network, or it can cross Layer 3 boundaries. The Tunnel Endpoint (TEP) is the connection point at which encapsulation and decapsulation take place.
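As a companion to the encapsulation sketch above (same assumptions), here is what the decapsulation side could look like when a receiving TEP recovers the VNI and inner frame from a Geneve packet:

```python
import struct

def parse_geneve(packet: bytes) -> tuple[int, bytes]:
    """Strip the 8-byte Geneve base header (plus any options) and
    return the VNI together with the inner Ethernet frame."""
    first, _flags, _proto, vni_word = struct.unpack("!BBHI", packet[:8])
    opt_len = (first & 0x3F) * 4   # option length is carried in 4-byte words
    vni = vni_word >> 8            # top 24 bits of the last word are the VNI
    return vni, packet[8 + opt_len:]

# Reversing the earlier example: parse_geneve(packet) -> (5001, inner_frame)
```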
Logical Load Balancer
The NSX-T logical load balancer offers a high-availability service for applications and distributes the network traffic load among multiple servers. The load balancer accepts TCP, UDP, HTTP, or HTTPS requests on a virtual IP address and determines which pool server to use. The logical load balancer is supported only on Tier-1 logical routers.
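To make the pool-selection step concrete, here is a toy round-robin model. It is a sketch of the concept only; the NSX-T load balancer supports several algorithms, and the class name and member addresses below are invented for illustration:

```python
import itertools

class RoundRobinPool:
    """Toy model of VIP-to-pool-member selection using round-robin."""

    def __init__(self, members: list[str]) -> None:
        self._cycle = itertools.cycle(members)

    def pick(self) -> str:
        # Each request arriving on the virtual IP goes to the next member.
        return next(self._cycle)

pool = RoundRobinPool(["10.0.1.11", "10.0.1.12", "10.0.1.13"])
for _ in range(4):
    print(pool.pick())   # 10.0.1.11, 10.0.1.12, 10.0.1.13, 10.0.1.11
```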
Compute Manager
A compute manager is an application that manages resources such as hosts and VMs. One example is vCenter Server.
Control Plane
The control plane computes runtime state based on configuration from the management plane. It disseminates topology information reported by the data plane elements and pushes stateless configuration to the forwarding engines.
Data Plane
The data plane performs stateless forwarding or transformation of packets based on tables populated by the control plane. It reports topology information to the control plane and maintains packet-level statistics.
Overlay Logical Network
Logical network implemented using Layer 2-in-Layer 3 tunneling such that the topology seen by VMs is decoupled from that of the physical network.
Physical Interface (pNIC)
Network interface on a physical server that a hypervisor is installed on.
Transport Zone
Collection of transport nodes that defines the maximum span for logical switches. A transport zone represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on those hypervisors.
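For illustration, a transport zone can also be created through the manager API. As with the earlier sketch, the address, credentials, and field values are placeholders, and the pre-policy `POST /api/v1/transport-zones` endpoint is assumed:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address

# host_switch_name ties the zone to an N-VDS name; transport_type
# selects OVERLAY (Geneve) or VLAN networking.
body = {
    "display_name": "tz-overlay",
    "host_switch_name": "nvds-1",
    "transport_type": "OVERLAY",
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/transport-zones",
    json=body,
    auth=("admin", "VMware1!"),               # placeholder credentials
    verify=False,                             # lab only
)
print("created transport zone:", resp.json()["id"])
```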
Transport Node
A node capable of participating in an NSX-T overlay or NSX-T VLAN networking. For a KVM host, you can preconfigure the N-VDS, or you can have NSX Manager perform the configuration. For an ESXi host, NSX Manager always configures the N-VDS.
VM Interface (vNIC)
Network interface on a virtual machine that provides connectivity between the virtual guest operating system and the standard vSwitch or vSphere Distributed Switch. The vNIC can be attached to a logical port. You can identify a vNIC by its universally unique identifier (UUID).