
## Summary

In the evolving hybrid cloud landscape, capacity planning is critical for deploying resilient, high-performance Software Defined Networking (SDN) infrastructures. Azure Local SDN (formerly Azure Stack HCI) delivers enterprise-grade connectivity on-premises while integrating with the broader Azure ecosystem. This article dives deep into capacity planning for Azure Local SDN deployments, covering NIC teaming strategies, SLB limits, virtual switch scaling, and performance features such as jumbo frames, RDMA, and dual-stack support.
## Why Capacity Planning Matters in Azure Local SDN
Capacity planning ensures the SDN infrastructure aligns with workload demands, high availability goals, and future scalability. Poor planning can lead to bottlenecks in virtual switch throughput, SLB rule exhaustion, and underutilized NIC bandwidth.
Key reasons to prioritize capacity planning:
- Prevent SLB rule exhaustion in high-traffic environments
- Ensure NIC teams deliver expected redundancy and throughput
- Maintain optimal VM-to-host density without performance trade-offs
- Align with host RDMA and jumbo frame configurations for low latency
## NIC Teaming: Throughput, Modes, and Queue Depth
Teaming Modes:
- Switch Independent: No switch configuration needed. Best for maximum compatibility.
- Static Teaming: Requires static configuration on both NIC and switch.
- LACP: Dynamic link aggregation. Offers load balancing but requires switch support.
Key Considerations:
| Parameter | Switch Independent | Static | LACP |
|---|---|---|---|
| Redundancy | Yes | Yes | Yes |
| Load Balancing | Dynamic | Static | Dynamic |
| RDMA Support | Yes (with SET) | No (LBFO) | No (LBFO) |
| Host Queue Scaling | Yes (with RSS/VMQ) | Yes | Yes |

Note: Azure Local requires Switch Embedded Teaming (SET), which is switch independent; legacy LBFO teams (static and LACP) do not support RDMA.
- Jumbo Frames: Enable a 9K MTU to reduce CPU usage during large packet transfers.
- RDMA (RoCEv2): Bypasses the CPU for storage traffic; requires RDMA-capable (offload) NICs.
- Queue Depth: Use PowerShell to validate RSS/VMQ queues:

```powershell
Get-NetAdapterRss | Select-Object Name, NumberOfReceiveQueues
```
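As a rough planning aid, the check below estimates whether planned vNIC density is covered by the hardware queues the adapter reports. This is a minimal Python sketch; the queue counts are illustrative assumptions, and the real values come from `Get-NetAdapterRss` / `Get-NetAdapterVmq` on the host.

```python
# Rough check: can each planned vNIC map to its own VMQ/RSS receive queue?
# Queue counts are illustrative assumptions; read the real numbers from
# Get-NetAdapterRss / Get-NetAdapterVmq on the host.

def queues_sufficient(receive_queues: int, planned_vnics: int) -> bool:
    """True if every planned vNIC can get a dedicated hardware queue."""
    return planned_vnics <= receive_queues

# Example: a NIC reporting 32 receive queues hosting 40 vNICs is oversubscribed
print(queues_sufficient(32, 40))  # -> False
print(queues_sufficient(64, 40))  # -> True
```

Oversubscription does not break connectivity, but vNICs without a dedicated queue fall back to shared processing, which shows up as uneven CPU load under traffic.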
## SLB Limits: Throughput, Rules, and Design
SLB Characteristics in Azure Local:
- Distributed data path
- Centralized management
- Operates within the Network Controller fabric
Key SLB Capacity Limits:
| Metric | Limit (2025) |
|---|---|
| Max SLB Rules per Host | 10,000 |
| Max Concurrent Connections | 250,000 per SLB MUX |
| Max Throughput | 40 Gbps per host |
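To translate these limits into a MUX count, a simple sizing sketch can help. The Python snippet below assumes the 250,000-connections-per-MUX figure from the table and an 80% headroom target; it is a planning aid, not an official sizing formula.

```python
import math

# Hypothetical MUX sizing helper based on the limits quoted above.
# MAX_CONN_PER_MUX comes from the table (250,000 concurrent connections);
# treat it as a planning input, not a guarantee.
MAX_CONN_PER_MUX = 250_000

def muxes_needed(peak_connections: int, headroom: float = 0.8) -> int:
    """MUX count so each MUX stays under `headroom` of its connection limit."""
    usable_per_mux = MAX_CONN_PER_MUX * headroom
    return math.ceil(peak_connections / usable_per_mux)

print(muxes_needed(600_000))  # -> 3 (each MUX carries ~200k connections)
```

Sizing to 80% of the limit leaves room for connection spikes and for MUX failover, where surviving MUXes absorb the failed node's share.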
PowerShell Check:

```powershell
Get-SdnLoadBalancerMux | Select-Object Name, RuleCount, MaxConnections
```
SLB Design Tips:
- Group rules per application port range
- Use shared frontends to minimize duplication
- Monitor rule health with SDN diagnostics logs
## vSwitch Scaling: VM Density, Isolation, and Overlay Limits
Virtual Switch Characteristics:
- Hosted by Hyper-V
- Supports VLANs, SR-IOV, RDMA, and NVGRE/VXLAN overlays
vSwitch Capacity Planning:
| Resource | Limit |
|---|---|
| Max vNICs per vSwitch | 1024 |
| VLANs per vSwitch | 4096 |
| SR-IOV Enabled VMs | 64 per host |
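A quick way to sanity-check a planned deployment against these ceilings is a small validation helper. This is a hedged Python sketch; the limit values are taken from the table above and should be re-verified against current documentation before use.

```python
# Illustrative validation of a planned deployment against the vSwitch
# limits in the table above (1,024 vNICs per vSwitch, 64 SR-IOV VMs per host).
LIMITS = {"vnics_per_vswitch": 1024, "sriov_vms_per_host": 64}

def validate_plan(vms: int, vnics_per_vm: int, sriov_vms: int) -> list[str]:
    """Return a list of limit violations; empty list means the plan fits."""
    issues = []
    if vms * vnics_per_vm > LIMITS["vnics_per_vswitch"]:
        issues.append("total vNIC count exceeds the per-vSwitch limit")
    if sriov_vms > LIMITS["sriov_vms_per_host"]:
        issues.append("too many SR-IOV VMs for a single host")
    return issues

# 100 VMs x 4 vNICs = 400 vNICs, 32 SR-IOV VMs: within limits
print(validate_plan(vms=100, vnics_per_vm=4, sriov_vms=32))  # -> []
```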
Bicep: Create SDNv2 Compatible vSwitch

```bicep
resource vSwitch 'Microsoft.Network/virtualSwitches@2022-11-01' = {
  name: 'SDNv2HostSwitch'
  location: resourceGroup().location
  properties: {
    type: 'External'
    managementMode: 'HyperVHost'
  }
}
```
RDMA + Jumbo:
- Ensure jumbo frames are enabled on the vSwitch and physical NICs:

```powershell
Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet"
```

- RDMA and SR-IOV require hardware validation with `Get-NetAdapterHardwareInfo`
## IPv4/IPv6 Dual-Stack Design
Azure Local SDN supports both IPv4 and IPv6 across SLB, routes, and VNETs.
Key Guidelines:
- Dual-stack VMs need distinct SLB rules per protocol
- Ensure route propagation is enabled for both stacks
- Validate AAAA (IPv6) records in SDN name resolution
PowerShell Dual-Stack SLB Sample:

```powershell
New-SdnLoadBalancerRule -Name WebV6 -Protocol TCP -FrontendPort 443 -BackendPort 443 -FrontendIP "2001:db8::1" -BackendPool $pool -ResourceGroup "SDN-RG"
```
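Because dual-stack VMs need distinct SLB rules per address family, rule pairs lend themselves to programmatic generation. The Python sketch below illustrates the pattern; the rule fields are hypothetical simplifications, and the frontend addresses are documentation prefixes (RFC 5737 / RFC 3849).

```python
# Sketch: a dual-stack VIP needs one SLB rule per address family.
# Rule fields are a hypothetical simplification of the real rule schema;
# frontend addresses use documentation prefixes (192.0.2.0/24, 2001:db8::/32).

def dual_stack_rules(name: str, port: int, v4_vip: str, v6_vip: str) -> list[dict]:
    """Build the IPv4/IPv6 rule pair for one frontend endpoint."""
    return [
        {"name": f"{name}V4", "protocol": "TCP", "frontendIP": v4_vip, "port": port},
        {"name": f"{name}V6", "protocol": "TCP", "frontendIP": v6_vip, "port": port},
    ]

rules = dual_stack_rules("Web", 443, "192.0.2.10", "2001:db8::1")
print([r["name"] for r in rules])  # -> ['WebV4', 'WebV6']
```

Generating both rules from one definition keeps the stacks in sync; it also makes the capacity impact explicit, since each dual-stack endpoint consumes two entries from the per-host rule budget.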
## Benchmarks & Planning Tables
NIC Bandwidth by Team Config:
| NIC Configuration | Teaming Mode | Total Bandwidth |
|---|---|---|
| 2 x 10Gb | Switch Independent | 20 Gbps |
| 4 x 25Gb | SET (Switch Independent) | 100 Gbps |
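The arithmetic behind this table is simple multiplication; a one-line Python helper makes the planning math explicit. Note that this is aggregate capacity, not a per-flow guarantee: a single TCP flow stays on one physical NIC, so achievable throughput depends on flow distribution.

```python
# Planning arithmetic behind the table above: aggregate team bandwidth is
# NIC count x per-NIC speed. A single flow is still limited to one NIC.

def team_bandwidth_gbps(nic_count: int, nic_speed_gbps: int) -> int:
    """Aggregate bandwidth of a NIC team, in Gbps."""
    return nic_count * nic_speed_gbps

print(team_bandwidth_gbps(2, 10))  # -> 20
print(team_bandwidth_gbps(4, 25))  # -> 100
```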
vSwitch Density:
| Host Size | Max VMs | vNICs per VM |
|---|---|---|
| 512 GB RAM | 100 | 4 |
SLB Rule Guidelines:
- Plan 1 SLB rule per app endpoint (or use ranges)
- Reserve headroom: don’t exceed 80% of max rule count
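The 80% headroom guideline can be encoded as a simple pre-deployment check. This Python sketch uses the 10,000-rule per-host ceiling quoted in the SLB limits table as a planning input.

```python
# The 80% headroom guideline as a check. MAX_RULES_PER_HOST is the
# per-host ceiling from the SLB limits table; treat it as a planning input.
MAX_RULES_PER_HOST = 10_000

def rules_within_headroom(planned_rules: int, headroom: float = 0.8) -> bool:
    """True if the planned rule count stays under the headroom threshold."""
    return planned_rules <= MAX_RULES_PER_HOST * headroom

print(rules_within_headroom(7_500))  # -> True  (under the 8,000 threshold)
print(rules_within_headroom(8_500))  # -> False (headroom exhausted)
```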
## Best Practices
- Monitor NIC Queue Utilization: Use Perfmon or SDN Express insights.
- Pre-stage SLB Rules: Avoid on-the-fly rule creation under load.
- Enable Jumbo Frames + RDMA for east-west storage traffic.
- Baseline Testing: Use NTttcp, VM Fleet, or NetPerf to stress-test the SLB and vSwitch paths.
- Segment VMs by Role: Separate control plane, SLB MUX, and workload VMs.
## Final Thoughts
Capacity planning is foundational for resilient Azure Local SDN design. Understanding NIC teaming modes, scaling SLB efficiently, and leveraging performance features like RDMA and jumbo frames can unlock significant gains in throughput and reliability. By validating queue depths, configuring dual-stack rules, and staying within supported limits, you can deliver enterprise-grade networking across hybrid workloads.
## Disclaimer
This guidance is based on Azure Local SDN capabilities as of July 2025. Always validate hardware compatibility and limits using Microsoft’s latest documentation.