Overlay Networking in Nutanix Flow VPC: Geneve and Encapsulation Walkthrough

Introduction

Overlay networking is the backbone of modern cloud and datacenter designs. With Nutanix Flow VPC, the platform has evolved from VXLAN to Geneve encapsulation to deliver next-generation virtual network overlays. Geneve enables more extensible, SDN-friendly, and future-proof networking for Nutanix environments. But how does this work at the packet level? Let’s take a “day in the life of a packet” journey—following a frame from a VM through every overlay layer, revealing exactly how Nutanix Flow VPC and Geneve encapsulation power secure, scalable, and flexible virtual networking.


1. Overlay Networking and Geneve in Nutanix Flow VPC

Nutanix Flow VPC uses overlay networking to decouple tenant connectivity from physical underlay topology. This approach provides scalable, isolated, and highly programmable networks. The adoption of Geneve (Generic Network Virtualization Encapsulation) instead of VXLAN brings more flexibility and extensibility to Nutanix networking.

Key Benefits:

  • Greater scalability and extensibility compared to VXLAN
  • Supports metadata-rich overlays for advanced SDN and microsegmentation
  • Seamless, software-defined network segmentation for tenants and applications

Core Components in Nutanix Flow VPC:

  • Virtual Private Cloud (VPC): Logical network container, includes subnets and routing policies
  • Geneve Tunnel Endpoints (GTEPs): Implemented on each AHV host; handle Geneve encapsulation/decapsulation
  • Underlay Network: Physical data center fabric carrying encapsulated overlay traffic

2. Geneve Encapsulation: Packet Structure Deep Dive

Nutanix Flow VPC uses Geneve as its encapsulation protocol. Geneve not only tunnels Layer 2 frames over a Layer 3 underlay, it also carries additional metadata to support advanced SDN features.

Geneve Encapsulation Stack

A Geneve-encapsulated packet contains multiple protocol layers:

  • Outer Ethernet header (underlay source/destination MACs)
  • Outer IP header (source/destination GTEP IPs)
  • Outer UDP header (destination port 6081)
  • Geneve header (VNI, protocol type, optional TLVs)
  • Inner Ethernet frame (the original VM frame and its payload)

Geneve Header Field Breakdown

Geneve headers are 8 bytes minimum with support for variable-length options. Here’s the basic breakdown:

Field           Size (bits)   Description
Version         2             Protocol version (currently 0)
Opt Len         6             Length of options, in 4-byte words
OAM             1             OAM (control packet) flag
Critical        1             Critical options present
Reserved        6             Must be zero
Protocol Type   16            EtherType of the inner payload
VNI             24            Virtual Network Identifier
Reserved        8             Must be zero
Options         Variable      Optional TLVs (metadata/telemetry)

Sample Geneve Header (Wireshark hex):

00 00 65 58 12 34 56 00 [options...]
  • 00 = Version 0, Opt Len 0 (no options)
  • 00 = OAM/Critical flags unset, reserved bits zero
  • 65 58 = Protocol type (0x6558 = bridged Ethernet)
  • 12 34 56 = VNI 0x123456
  • 00 = Reserved
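The fixed 8-byte header above is simple enough to build and inspect by hand. The following is a minimal, stdlib-only sketch of packing it; `build_geneve_header` is a hypothetical helper for illustration, not Nutanix or OVS code:

```python
import struct

def build_geneve_header(vni: int, proto: int = 0x6558, opt_len_words: int = 0) -> bytes:
    """Pack the fixed 8-byte Geneve header (RFC 8926), with no options."""
    byte0 = (0 << 6) | (opt_len_words & 0x3F)   # version 0 (2 bits) + Opt Len (6 bits)
    byte1 = 0                                    # OAM/Critical flags unset, reserved zero
    vni_and_reserved = (vni & 0xFFFFFF) << 8     # 24-bit VNI + 8 reserved bits
    return struct.pack("!BBH", byte0, byte1, proto) + struct.pack("!I", vni_and_reserved)

hdr = build_geneve_header(vni=0x123456)
print(hdr.hex(" "))   # 00 00 65 58 12 34 56 00
```

The output matches the sample Wireshark hex byte for byte, which is a handy sanity check when comparing against a live capture.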

3. Day in the Life of a Packet: Geneve Edition

Let’s follow a packet as it travels through Nutanix Flow VPC using Geneve encapsulation:

3.1. Packet Creation in Source VM

  • The VM’s guest OS generates a standard Layer 2 Ethernet frame (e.g., ARP, ICMP, TCP).
  • This frame is sent out the VM’s vNIC, directly connected to the AHV host’s Open vSwitch (OVS) bridge.

3.2. Ingress: OVS/GTEP Receives and Inspects Frame

  • OVS determines if the destination is a local VM (same AHV host and VNI) or remote.
    • Local: Frame is switched directly to the target VM’s vNIC.
    • Remote: Overlay encapsulation is required.

3.3. Geneve Encapsulation at Source GTEP

  • OVS consults its flow tables and VNI-to-GTEP mapping (managed by Nutanix SDN controller).
  • Encapsulation occurs as follows:
    • Outer Ethernet: Src MAC = Host’s uplink, Dst MAC = next-hop switch/router
    • Outer IP: Src IP = local GTEP IP, Dst IP = remote GTEP IP
    • UDP Header: Src port = random high port, Dst port = 6081 (Geneve)
    • Geneve Header: Includes the VNI (maps to VPC/subnet) and protocol type
    • Geneve Options (if used): May include security metadata, telemetry, or custom TLVs
    • Inner Ethernet: Original VM frame

[Figure: Geneve-encapsulated packet]
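The layering above can be sketched in code. This is an illustrative, stdlib-only mock-up of what the source GTEP assembles, not actual OVS datapath code: IP/UDP checksums are left at zero, the IPv4 TTL/ID are fixed, and `geneve_encapsulate` and its parameters are hypothetical names chosen for clarity.

```python
import struct
import socket

GENEVE_PORT = 6081  # IANA-assigned UDP destination port for Geneve

def geneve_encapsulate(inner_frame: bytes, vni: int,
                       src_ip: str, dst_ip: str,
                       src_mac: bytes, dst_mac: bytes,
                       src_port: int = 54001) -> bytes:
    """Wrap an inner Ethernet frame in outer Ethernet/IPv4/UDP/Geneve headers."""
    # Geneve: version 0, no options, protocol 0x6558 (bridged Ethernet), 24-bit VNI
    geneve = struct.pack("!BBH", 0, 0, 0x6558) + struct.pack("!I", (vni & 0xFFFFFF) << 8)
    # Outer UDP: destination port 6081 tells the remote GTEP this is Geneve
    udp_len = 8 + len(geneve) + len(inner_frame)
    udp = struct.pack("!HHHH", src_port, GENEVE_PORT, udp_len, 0)
    # Outer IPv4: GTEP-to-GTEP addressing; protocol 17 = UDP; checksum left at 0
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + udp_len, 0, 0, 64, 17, 0,
                     socket.inet_aton(src_ip), socket.inet_aton(dst_ip))
    # Outer Ethernet: underlay MACs, EtherType 0x0800 (IPv4)
    eth = dst_mac + src_mac + struct.pack("!H", 0x0800)
    return eth + ip + udp + geneve + inner_frame

pkt = geneve_encapsulate(b"\x00" * 64, vni=20010,
                         src_ip="10.10.10.5", dst_ip="10.10.10.11",
                         src_mac=b"\x02" * 6, dst_mac=b"\x04" * 6)
```

Note how the original frame is carried untouched at the end of the packet; everything the underlay routes on sits in front of it.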

3.4. Underlay Network Transit

  • The encapsulated packet is forwarded over the physical (underlay) network.
  • Only the outer headers are visible to underlay devices; the inner (overlay) frame is opaque.
  • Routing and switching are based solely on the outer IP and Ethernet headers.

3.5. Destination GTEP: Decapsulation and Frame Delivery

  • The remote AHV host’s GTEP (OVS) receives the packet on UDP port 6081.
  • OVS identifies the Geneve tunnel via the VNI and removes the outer headers.
  • Any Geneve options (if present) are processed according to Nutanix Flow SDN logic.
  • The original Ethernet frame is delivered to the destination VM’s vNIC.
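The decapsulation side of this step can be sketched as a small parser. This is a simplified illustration (the helper name and error handling are my own, and options are skipped rather than interpreted as real Flow SDN logic would):

```python
import struct

def geneve_decapsulate(udp_payload: bytes) -> tuple[int, bytes]:
    """Parse a Geneve header from a UDP port-6081 payload; return (VNI, inner frame)."""
    byte0 = udp_payload[0]
    version = byte0 >> 6
    if version != 0:
        raise ValueError(f"unsupported Geneve version {version}")
    opt_len = (byte0 & 0x3F) * 4              # Opt Len is counted in 4-byte words
    vni = struct.unpack("!I", udp_payload[4:8])[0] >> 8
    inner = udp_payload[8 + opt_len:]         # strip fixed header plus any options
    return vni, inner

vni, frame = geneve_decapsulate(bytes.fromhex("0000655812345600") + b"\xAA" * 14)
print(hex(vni))   # 0x123456
```

Because Opt Len is carried in the first byte, a receiver can always skip options it does not understand and still recover the inner frame, which is part of what makes Geneve extensible.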

3.6. OVS Flow Table Logic, Routing, and Security

  • GTEP Table: Tracks which GTEPs (AHV hosts) are members of each VNI.
  • Flow Table: Controls encapsulation, decapsulation, forwarding, and enforces microsegmentation policies.
  • Option Handling: Geneve options allow for additional security, telemetry, or tenant information to be enforced per policy.

Decision Points at Every Step:

  1. Local vs remote destination (direct forward or encapsulate)
  2. VNI-to-GTEP mapping lookup
  3. OVS flow rules: output port, encapsulation actions
  4. Security and microsegmentation policy enforcement (pre/post encapsulation)
  5. Handling of Geneve TLVs if used
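The first two decision points can be illustrated with a toy lookup. The tables and return strings below are hypothetical stand-ins; in a real deployment these mappings are programmed into OVS by the Nutanix SDN controller, not hand-coded:

```python
LOCAL_GTEP_IP = "10.10.10.5"          # this host's tunnel endpoint (assumed value)

# Hypothetical learned state: (VNI, destination MAC) -> owning GTEP IP
MAC_TO_GTEP = {
    (20010, "vm2-mac"): "10.10.10.11",
    (20010, "vm3-mac"): "10.10.10.5",
}

def forwarding_decision(vni: int, dst_mac: str) -> str:
    gtep = MAC_TO_GTEP.get((vni, dst_mac))
    if gtep is None:
        return "flood/learn"                  # unknown MAC: replicate to VNI members
    if gtep == LOCAL_GTEP_IP:
        return "switch locally"               # same host and VNI: no encapsulation
    return f"encapsulate to {gtep}"           # remote host: Geneve tunnel to that GTEP

print(forwarding_decision(20010, "vm2-mac"))  # encapsulate to 10.10.10.11
```

In practice, microsegmentation policy checks wrap around this lookup on both the ingress and egress side, which is what decision point 4 refers to.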

3.7. Delivery to Target VM

  • The VM receives the original Ethernet frame, completely unaware of any overlay or Geneve encapsulation.
  • Application communication is seamless across the overlay network.

[Figure: end-to-end Geneve flow]

3.8. Advanced Nuances Unique to Geneve

  • Metadata and TLVs: Geneve supports optional metadata fields that can carry policy information, security tags, telemetry, or tenant/service data.
  • Microsegmentation Integration: These fields enable even finer-grained SDN control, as Nutanix Flow VPC evolves to support richer feature sets.
  • Future-proofing: Geneve’s extensibility ensures overlay networking in Nutanix Flow can adapt to new requirements without protocol redesign.

4. Geneve in Wireshark: Sample Packet Walkthrough

To verify Geneve overlay traffic in Nutanix Flow VPC, examine a Geneve-encapsulated packet with Wireshark.

Wireshark Layer View:

Frame 320: 170 bytes on wire
Ethernet II, Src: Host1_MAC, Dst: Switch_MAC
IP, Src: 10.10.10.5, Dst: 10.10.10.11
UDP, Src Port: 54001, Dst Port: 6081
Geneve Header, VNI: 20010, Options: None
Ethernet II, Src: VM1_MAC, Dst: VM2_MAC
IP (inner), Src: 192.168.100.50, Dst: 192.168.100.90
TCP Payload

What to check:

  • UDP Port 6081 confirms Geneve usage.
  • Geneve VNI identifies the overlay segment.
  • Inner Ethernet/IP shows the original VM-to-VM communication, preserved end-to-end.

5. End-to-End Packet Flow

Packet is encapsulated at source host, routed over underlay, decapsulated at destination, and delivered to the target VM.


6. Geneve vs VXLAN: Key Differences for Nutanix Flow VPC

Feature            VXLAN (Legacy)    Geneve (Flow VPC)
UDP Port           4789              6081
Header Size        8 bytes           8+ bytes (with options)
Metadata/Options   None              Variable TLVs
Extensibility      Limited           Very high
Nutanix Usage      Flow Networks     Flow VPC (current)

7. Summary

With the shift to Geneve encapsulation in Nutanix Flow VPC, Nutanix delivers a more extensible, secure, and future-ready overlay networking platform. Understanding every encapsulation step and packet flow is essential for architecture, design validation, and troubleshooting. By mastering Geneve overlays, architects and engineers can unlock the full potential of Nutanix networking—today and for evolving SDN use cases.

Disclaimer: The views expressed in this article are those of the author and do not represent the opinions of Nutanix, my employer or any affiliated organization. Always refer to the official Nutanix documentation before production deployment.

 
