
Capacity Planning for Azure Local SDN: NIC Teams, SLB Limits, vSwitch Scaling

Summary: Capacity planning is critical for deploying resilient, high-performance Software Defined Networking (SDN) infrastructure in the hybrid cloud. Azure Local SDN (formerly Azure Stack HCI) delivers enterprise-grade connectivity on-premises while integrating with the broader Azure ecosystem. This article covers how to plan capacity for Azure Local SDN deployments: NIC teaming strategies, SLB limits, virtual switch scaling, and performance features such as jumbo frames, RDMA, and dual-stack support.


Why Capacity Planning Matters in Azure Local SDN

Capacity planning ensures the SDN infrastructure aligns with workload demands, high-availability goals, and future scalability. Poor planning can lead to bottlenecks in virtual switch throughput, SLB rule exhaustion, and underutilized NIC bandwidth.

Key reasons to prioritize capacity planning:

  1. Avoid throughput bottlenecks at the virtual switch and NIC team.
  2. Stay within SLB rule and connection limits before they are hit under production load.
  3. Right-size hosts so NIC bandwidth and queue resources are fully utilized.


NIC Teaming: Throughput, Modes, and Queue Depth

Teaming Modes:

  1. Switch Independent: no switch-side configuration required; the host distributes traffic across team members.
  2. Static Teaming: switch ports are statically configured as a team on both ends.
  3. LACP: the team is negotiated dynamically with the switch via the Link Aggregation Control Protocol.

Key considerations by mode:

Parameter | Switch Independent | Static | LACP
Redundancy | Yes | Yes | Yes
Load Balancing | Dynamic | Static | Dynamic
RDMA Support | Partial (depends on NIC) | Limited | Preferred
Host Queue Scaling | Yes (with RSS/VMQ) | Yes | Yes

Check RSS receive queue depth per adapter with:

Get-NetAdapterRss | Select Name, NumberOfReceiveQueues
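When sizing a team, it helps to separate aggregate bandwidth from per-flow bandwidth: a single TCP flow typically hashes to one team member, so it cannot exceed one NIC's line rate no matter how many NICs are teamed. A minimal planning sketch (the helper and numbers are illustrative, not an Azure API):

```python
def team_capacity(nic_count: int, nic_gbps: float) -> dict:
    """Estimate team capacity: aggregate across members vs. a single flow.

    A single flow generally hashes to one team member, so per-flow
    throughput is capped at one NIC's line rate regardless of team size.
    """
    return {
        "aggregate_gbps": nic_count * nic_gbps,
        "per_flow_gbps": nic_gbps,
    }

# Example: a 4 x 25 GbE team
cap = team_capacity(4, 25.0)
print(cap)  # {'aggregate_gbps': 100.0, 'per_flow_gbps': 25.0}
```

This is why queue scaling (RSS/VMQ) matters alongside raw link count: many flows are needed to fill the aggregate.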

SLB Limits: Throughput, Rules, and Design

SLB Characteristics in Azure Local:

  1. The Software Load Balancer runs as one or more MUX (multiplexer) VM instances managed by Network Controller.
  2. MUX instances advertise virtual IPs (VIPs) to the physical network over BGP, allowing inbound traffic to be spread across MUXes.
  3. Return traffic bypasses the MUX via Direct Server Return (DSR), keeping the MUX off the hot path for responses.

Key SLB Capacity Limits:

Metric | Limit (2025)
Max SLB Rules per Host | 10,000
Max Concurrent Connections | 250,000 per SLB MUX
Max Throughput | 40 Gbps per host

PowerShell check (via the SdnDiagnostics module):

Get-SdnLoadBalancerMux | Select Name, RuleCount, MaxConnections

SLB Design Tips:

  1. Pre-stage rules during deployment; creating rules under load adds Network Controller churn.
  2. Track rule count per host against the 10,000-rule limit and alert well before it is reached.
  3. Scale out MUX instances as concurrent connections approach 250,000 per MUX.

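A simple way to stay inside the published limits is to validate planned rule and connection counts before rollout. A hedged sketch using the limits from the table above (the thresholds are the 2025 figures quoted in this article, not values read from a live system):

```python
# Published limits quoted earlier in this article (2025 figures).
MAX_RULES_PER_HOST = 10_000
MAX_CONNECTIONS_PER_MUX = 250_000

def slb_headroom(planned_rules: int, planned_connections: int, mux_count: int) -> dict:
    """Return remaining headroom; negative values mean the plan exceeds a limit."""
    per_mux = planned_connections / mux_count
    return {
        "rule_headroom": MAX_RULES_PER_HOST - planned_rules,
        "connection_headroom_per_mux": MAX_CONNECTIONS_PER_MUX - per_mux,
    }

# Example: 8,500 rules and 600,000 connections spread over 3 MUX instances
print(slb_headroom(8_500, 600_000, 3))
# {'rule_headroom': 1500, 'connection_headroom_per_mux': 50000.0}
```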

vSwitch Scaling: VM Density, Isolation, and Overlay Limits

Virtual Switch Characteristics:

  1. SDN traffic flows through the Hyper-V virtual switch, with the Virtual Filtering Platform (VFP) extension applying policy.
  2. Tenant isolation uses VXLAN overlay encapsulation on top of the provider (physical) network.
  3. Offloads such as RSS, VMQ, and SR-IOV determine how far a single host can scale.

vSwitch Capacity Planning:

Resource | Limit
Max vNICs per vSwitch | 1024
VLANs per vSwitch | 4096
SR-IOV Enabled VMs | 64 per host
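Before committing to a VM density target, it is worth sanity-checking planned vNIC counts against the per-vSwitch limit above. A small illustrative sketch (the limit value is taken from the table in this article):

```python
MAX_VNICS_PER_VSWITCH = 1024  # per the table above

def vswitch_fits(vm_count: int, vnics_per_vm: int) -> bool:
    """True if the planned vNIC total fits on a single vSwitch."""
    return vm_count * vnics_per_vm <= MAX_VNICS_PER_VSWITCH

# Example: 100 VMs with 4 vNICs each = 400 vNICs, well within the limit
print(vswitch_fits(100, 4))  # True
print(vswitch_fits(300, 4))  # False (1,200 vNICs exceeds the limit)
```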

PowerShell: Create an SDN-Compatible vSwitch

Host virtual switches are created on each node with Hyper-V PowerShell rather than ARM/Bicep; Azure Local SDN expects a Switch Embedded Teaming (SET) switch. The switch and adapter names below are examples:

New-VMSwitch -Name "SDNSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

RDMA + Jumbo Frames:

Verify jumbo frame support on the physical adapters and confirm RDMA is enabled:

Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet"
Get-NetAdapterRdma
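The payoff from jumbo frames is easy to quantify: per-packet header overhead shrinks as the MTU grows. A quick back-of-the-envelope calculation (assuming standard 1500- and 9000-byte MTUs and 40 bytes of TCP/IPv4 headers; the figures are illustrative):

```python
def payload_efficiency(mtu: int, header_bytes: int = 40) -> float:
    """Fraction of each packet carrying payload (TCP/IPv4 headers assumed)."""
    return (mtu - header_bytes) / mtu

print(f"MTU 1500: {payload_efficiency(1500):.1%}")  # 97.3%
print(f"MTU 9000: {payload_efficiency(9000):.1%}")  # 99.6%
```

The gain looks small per packet, but at storage-traffic rates the reduced packet count also cuts CPU and interrupt overhead, which is where jumbo frames plus RDMA pay off.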

IPv4/IPv6 Dual-Stack Design

Azure Local SDN supports both IPv4 and IPv6 across SLB, routes, and VNETs.

Key Guidelines:

  1. Plan address space for both families up front; retrofitting IPv6 onto existing logical networks is harder than deploying dual-stack from day one.
  2. Create matching SLB rules for the IPv4 and IPv6 frontends of each service.
  3. Verify BGP peering advertises both IPv4 and IPv6 VIP prefixes.

PowerShell Dual-Stack SLB Sample:

New-SdnLoadBalancerRule -Name WebV6 -Protocol TCP -FrontendPort 443 -BackendPort 443 -FrontendIP "2001:db8::1" -BackendPool $pool -ResourceGroup "SDN-RG"
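When managing many rules, a small script can confirm that service frontends exist in both address families. A hypothetical sketch using Python's standard ipaddress module (the input list shape is invented for illustration):

```python
import ipaddress

def dual_stack_coverage(frontend_ips: list) -> dict:
    """Group frontend IPs by family and report whether both are present."""
    families = {"v4": [], "v6": []}
    for ip in frontend_ips:
        addr = ipaddress.ip_address(ip)
        families["v4" if addr.version == 4 else "v6"].append(ip)
    families["dual_stack"] = bool(families["v4"]) and bool(families["v6"])
    return families

# Example: the IPv6 VIP from the sample above plus a hypothetical IPv4 VIP
print(dual_stack_coverage(["2001:db8::1", "203.0.113.10"])["dual_stack"])  # True
```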

Benchmarks & Planning Tables

NIC Bandwidth by Team Config:

NIC Count | Teaming Mode | Total Bandwidth
2 x 10Gb | Switch Independent | 20 Gbps
4 x 25Gb | LACP | 100 Gbps

vSwitch Density:

Host Size | Max VMs | vNICs per VM
512 GB RAM | 100 | 4

SLB Rule Guidelines: budget rule counts against the per-host limits listed earlier, and validate rule churn under load before production.


Best Practices

  1. Monitor NIC Queue Utilization: Use Perfmon or SDN Express insights.
  2. Pre-stage SLB Rules: Avoid on-the-fly rule creation under load.
  3. Enable Jumbo Frames + RDMA: Prioritize for east-west storage traffic.
  4. Baseline Testing: Use NTttcp, VM Fleet, or NetPerf to stress-test the SLB and vSwitch.
  5. Segment VMs by Role: Separate control plane, SLB MUX, and workload VMs.

Final Thoughts

Capacity planning is foundational for resilient Azure Local SDN design. Understanding NIC teaming modes, scaling SLB efficiently, and leveraging performance features like RDMA and jumbo frames can unlock significant gains in throughput and reliability. By validating queue depths, configuring dual-stack rules, and staying within supported limits, you can deliver enterprise-grade networking across hybrid workloads.


Disclaimer

This guidance is based on Azure Local SDN capabilities as of July 2025. Always validate hardware compatibility and limits using Microsoft’s latest documentation.
