Capacity Planning for Azure Local SDN: NIC Teams, SLB Limits, vSwitch Scaling

Summary

In the evolving landscape of hybrid cloud, capacity planning is critical for deploying resilient, high-performance Software Defined Networking (SDN) infrastructure. Azure Local SDN (formerly Azure Stack HCI) enables enterprise-grade connectivity on-premises while integrating with the broader Azure ecosystem. This article dives into how to plan capacity for Azure Local SDN deployments, covering NIC teaming strategies, SLB limits, virtual switch scaling, and performance features such as jumbo frames, RDMA, and dual-stack support.


Why Capacity Planning Matters in Azure Local SDN

Capacity planning ensures the SDN infrastructure aligns with workload demands, high availability goals, and future scalability. Poor planning can lead to bottlenecks in virtual switch throughput, SLB rule exhaustion, and underutilized NIC bandwidth.

Key reasons to prioritize capacity planning:

  • Prevent SLB rule exhaustion in high-traffic environments
  • Ensure NIC teams deliver expected redundancy and throughput
  • Maintain optimal VM-to-host density without performance trade-offs
  • Align with host RDMA and jumbo frame configurations for low latency

NIC Teaming: Throughput, Modes, and Queue Depth

Teaming Modes:

  • Switch Independent: No switch configuration needed. Best for maximum compatibility.
  • Static Teaming: Requires static configuration on both NIC and switch.
  • LACP: Dynamic link aggregation. Offers load balancing but requires switch support.

Key Considerations:

Parameter            Switch Independent    Static    LACP
Redundancy           Yes                   Yes       Yes
Load Balancing       Dynamic               Static    Dynamic
RDMA Support         Partial (depends)     Limited   Preferred
Host Queue Scaling   Yes (with RSS/VMQ)    Yes       Yes
  • Jumbo Frames: Enable a 9K MTU to reduce CPU usage in large packet transfers.
  • RDMA (RoCEv2): Bypasses the CPU for storage traffic; requires offload-capable NICs.
  • Queue Depth: Use PowerShell to validate RSS/VMQ queues:

Get-NetAdapterRss | Select Name, NumberOfReceiveQueues
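
A quick way to sanity-check team sizing is to separate aggregate bandwidth from single-flow bandwidth: a single flow hashes onto one member link regardless of teaming mode. A minimal Python sketch of that arithmetic (NIC counts and speeds here are illustrative assumptions, not measurements):

```python
# Rough NIC-team bandwidth math. The aggregate figure applies only across
# many flows; one flow is capped at a single member link's speed.

def team_bandwidth_gbps(nic_count: int, nic_speed_gbps: float) -> float:
    """Theoretical aggregate bandwidth of a NIC team."""
    return nic_count * nic_speed_gbps

def max_single_flow_gbps(nic_speed_gbps: float) -> float:
    """One flow hashes to one member link, so it is capped at link speed."""
    return nic_speed_gbps

aggregate = team_bandwidth_gbps(4, 25)   # e.g. 4 x 25 GbE in an LACP team
per_flow = max_single_flow_gbps(25)
print(f"aggregate: {aggregate} Gbps, single-flow cap: {per_flow} Gbps")
```

This is why adding team members raises host throughput under many concurrent flows but does nothing for a single large transfer.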

SLB Limits: Throughput, Rules, and Design

SLB Characteristics in Azure Local:

  • Distributed data path
  • Centralized management
  • Operates within the Network Controller fabric

Key SLB Capacity Limits:

Metric                       Limit (2025)
Max SLB Rules per Host       10,000
Max Concurrent Connections   250,000 per SLB MUX
Max Throughput               40 Gbps per host
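
The connection limit above drives MUX sizing. A small Python sketch of that calculation, keeping each MUX under a headroom fraction of its cap (the workload figure and 80% headroom are illustrative assumptions):

```python
import math

# MUX sizing from the table above: 250,000 concurrent connections per MUX.
MAX_CONN_PER_MUX = 250_000

def muxes_needed(expected_connections: int, headroom: float = 0.8) -> int:
    """MUX instances required so each stays under `headroom` of its cap."""
    usable = MAX_CONN_PER_MUX * headroom
    return math.ceil(expected_connections / usable)

print(muxes_needed(600_000))  # e.g. 600k expected concurrent connections
```

Sizing against a headroom fraction rather than the hard cap leaves room for connection spikes and MUX failover.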

PowerShell Check:

Get-SdnLoadBalancerMux | Select Name, RuleCount, MaxConnections

SLB Design Tips:

  • Group rules per application port range
  • Use shared frontends to minimize duplication
  • Monitor rule health with SDN diagnostics logs

vSwitch Scaling: VM Density, Isolation, and Overlay Limits

Virtual Switch Characteristics:

  • Hosted by Hyper-V
  • Supports VLANs, SR-IOV, RDMA, NVGRE/VxLAN overlays

vSwitch Capacity Planning:

Resource                Limit
Max vNICs per vSwitch   1,024
VLANs per vSwitch       4,096
SR-IOV Enabled VMs      64 per host
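
A planned VM layout can be checked against these limits before deployment. A minimal Python sketch (the VM counts and vNICs-per-VM values are illustrative assumptions):

```python
# Validate a planned layout against the vSwitch limits above.
MAX_VNICS_PER_VSWITCH = 1024
MAX_SRIOV_VMS_PER_HOST = 64

def fits_vswitch(vm_count: int, vnics_per_vm: int, sriov_vms: int = 0) -> bool:
    """True if the layout stays within the vNIC and SR-IOV limits."""
    return (vm_count * vnics_per_vm <= MAX_VNICS_PER_VSWITCH
            and sriov_vms <= MAX_SRIOV_VMS_PER_HOST)

print(fits_vswitch(100, 4, sriov_vms=16))   # 400 vNICs: within limits
print(fits_vswitch(300, 4))                 # 1,200 vNICs: over the limit
```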

Bicep: Create SDNv2 Compatible vSwitch

// Illustrative only: validate the resource type and API version against
// the current Azure Local SDN schema before use.
resource vSwitch 'Microsoft.Network/virtualSwitches@2022-11-01' = {
  name: 'SDNv2HostSwitch'
  location: resourceGroup().location
  properties: {
    type: 'External'
    managementMode: 'HyperVHost'
  }
}

RDMA + Jumbo:

  • Ensure jumbo frames are enabled on both the vSwitch and the physical NICs:

Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet"

  • RDMA and SR-IOV require hardware validation with Get-NetAdapterHardwareInfo
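
The CPU benefit of jumbo frames comes from moving the same data in far fewer packets. A small Python sketch of the per-packet arithmetic (assumes plain IPv4 + TCP with no options; header sizes are standard values):

```python
# Fewer packets per GB means fewer interrupts and less per-packet CPU work.
IP_TCP_HEADERS = 40   # bytes inside the MTU: IPv4 (20) + TCP (20), no options

def packets_per_gb(mtu: int) -> int:
    """Packets needed to move 1 GB of TCP payload at a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    return -(-1_000_000_000 // payload)   # ceiling division

std = packets_per_gb(1500)
jumbo = packets_per_gb(9000)
print(f"1500 MTU: {std} pkts/GB; 9000 MTU: {jumbo} pkts/GB "
      f"({std / jumbo:.1f}x fewer packets with jumbo frames)")
```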

IPv4/IPv6 Dual-Stack Design

Azure Local SDN supports both IPv4 and IPv6 across SLB, routes, and VNETs.

Key Guidelines:

  • Dual-stack VMs need distinct SLB rules per protocol
  • Ensure route propagation is enabled for both stacks
  • Validate AAAA (IPv6) DNS entries in SDN name resolution

PowerShell Dual-Stack SLB Sample:

New-SdnLoadBalancerRule -Name WebV6 -Protocol TCP -FrontendPort 443 -BackendPort 443 -FrontendIP "2001:db8::1" -BackendPool $pool -ResourceGroup "SDN-RG"
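
Because each protocol needs its own rule, a dual-stack service always produces a pair of rule definitions. A hypothetical Python sketch of that pairing (the naming scheme and addresses are illustrative; the addresses come from the RFC 5737 / RFC 3849 documentation ranges):

```python
# One frontend rule per protocol: a dual-stack app needs an IPv4 rule
# and an IPv6 rule for the same backend port.

def dual_stack_rules(name: str, port: int, v4_fe: str, v6_fe: str) -> list:
    """Return the pair of per-protocol rule definitions for one service."""
    return [
        {"name": f"{name}V4", "frontendIP": v4_fe, "port": port},
        {"name": f"{name}V6", "frontendIP": v6_fe, "port": port},
    ]

for rule in dual_stack_rules("Web", 443, "203.0.113.10", "2001:db8::1"):
    print(rule)
```

Note that each pair counts as two rules against the per-host rule limit, which matters when budgeting rule headroom.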

Benchmarks & Planning Tables

NIC Bandwidth by Team Config:

NIC Count   Teaming Mode         Total Bandwidth
2 x 10Gb    Switch Independent   20 Gbps
4 x 25Gb    LACP                 100 Gbps

vSwitch Density:

Host Size    Max VMs   vNICs per vSwitch
512 GB RAM   100       4 per VM

SLB Rule Guidelines:

  • Plan 1 SLB rule per app endpoint (or use ranges)
  • Reserve headroom: don’t exceed 80% of max rule count
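
The 80% headroom guideline can be turned into a pre-deployment check: flag any host whose rule count would cross the threshold. A minimal Python sketch (the rule counts are illustrative assumptions):

```python
# Rule-count budgeting against the 10,000-rules-per-host limit,
# keeping 20% headroom as recommended above.
MAX_RULES_PER_HOST = 10_000
HEADROOM = 0.80

def rule_budget_ok(current_rules: int, planned_rules: int) -> bool:
    """True if current + planned rules stay under 80% of the host limit."""
    return current_rules + planned_rules <= MAX_RULES_PER_HOST * HEADROOM

print(rule_budget_ok(7_000, 500))    # 7,500 vs 8,000 budget: OK
print(rule_budget_ok(7_800, 500))    # 8,300 vs 8,000 budget: over
```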

Best Practices

  1. Monitor NIC Queue Utilization: Use Perfmon or SDN Express insights.
  2. Pre-stage SLB Rules: Avoid on-the-fly rule creation under load.
  3. Enable Jumbo Frames + RDMA: Use both for East-West storage traffic.
  4. Baseline Testing: Use NTttcp, VM Fleet, or NetPerf to stress-test SLB + vSwitch.
  5. Segment VMs by Role: Separate the control plane, SLB MUXes, and workloads.

Final Thoughts

Capacity planning is foundational for resilient Azure Local SDN design. Understanding NIC teaming modes, scaling SLB efficiently, and leveraging performance features like RDMA and jumbo frames can unlock significant gains in throughput and reliability. By validating queue depths, configuring dual-stack rules, and staying within supported limits, you can deliver enterprise-grade networking across hybrid workloads.


Disclaimer

This guidance is based on Azure Local SDN capabilities as of July 2025. Always validate hardware compatibility and limits using Microsoft’s latest documentation.
