Network Interface Performance Considerations When Mounting External NAS Storage on Proxmox

Posted on 2025-11-03 by Rico

✅ Network card performance directly affects the transfer speed and stability of external storage.

This is one of the most critical factors when evaluating the use of external NAS storage with Proxmox VE.

Below, I break down the reasons, the bottlenecks, recommended architectures, and hardware choices in detail.


🧭 1. Why Does NIC Performance Affect NAS Access Speed?

When Proxmox VE mounts external NAS storage via NFS, iSCSI, or SMB, all VM and container disk I/O (reads and writes) is transmitted over the network.

In other words:

VM disk I/O = network packets

Therefore, network bandwidth and latency define the upper limit and stability of disk performance.
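As a quick sanity check on the figures in the next section: 1 GbE carries 1 Gbit/s, and 1 Gbit/s ÷ 8 bits per byte = 125 MB/s of theoretical bandwidth; TCP plus NFS/iSCSI protocol overhead typically leaves about 100–110 MB/s of usable payload throughput.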


⚙️ 2. Real-World Bottleneck Analysis

| Network Interface | Theoretical Bandwidth | Usable Speed (after protocol overhead) | Approximate Disk Performance |
| --- | --- | --- | --- |
| 1 GbE | 125 MB/s | 100–110 MB/s | ≈ a single SATA SSD at its limit |
| 2.5 GbE | 312 MB/s | 270–290 MB/s | Slightly faster than a SATA SSD; good for small/medium VM counts |
| 10 GbE | 1250 MB/s | 900–1100 MB/s | Close to NVMe SSD performance; supports multiple busy VMs |
| 40 GbE / 100 GbE | Enterprise-class | Architecture-dependent | Data-center level |

⚠️ If multiple VMs are simultaneously reading/writing to the external NAS, 1 GbE will almost certainly become the bottleneck.
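Before blaming the NAS disks, it is worth measuring the raw link itself. A minimal sketch using iperf3, assuming the NAS answers on a hypothetical storage IP of 10.0.10.20:

```
# On the NAS (or any Linux host on the storage network): start the server side
iperf3 -s

# On the PVE host: 30-second TCP throughput test toward the NAS
iperf3 -c 10.0.10.20 -t 30

# Reverse direction (NAS -> PVE), since read and write paths can differ
iperf3 -c 10.0.10.20 -t 30 -R
```

If iperf3 already plateaus around 110 MB/s, the 1 GbE link, not the NAS, is the bottleneck.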


🧩 3. Key Performance Parameters and Recommendations

| Parameter | Impact | Recommended Practice |
| --- | --- | --- |
| Bandwidth (Gbps) | Defines maximum throughput | At least 2.5 GbE; use 10 GbE for many VMs or database workloads |
| Latency | Affects IOPS (small-file performance) | Use direct connections or fiber; avoid multi-hop routing |
| Switch performance | Low-end switches add jitter | Use managed switches that support Jumbo Frames |
| NAS drive type | HDD vs. SSD | An SSD or hybrid cache significantly boosts speed |
| Protocol choice | NFS vs. iSCSI vs. SMB | iSCSI is usually fastest; NFS is a good balance; SMB is slowest |
| NIC driver | Intel / Broadcom / Realtek | Intel is most stable; Realtek often shows higher latency and lower throughput (see the driver check below) |
| Link aggregation (LACP) | Adds bandwidth and redundancy | Ideal for multi-VM NAS access |
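To confirm which driver and link speed a NIC is actually running on the PVE host, something like the following works; enp1s0 is a placeholder interface name:

```
# Negotiated link speed and duplex
ethtool enp1s0

# Kernel driver in use (e.g. igb/ixgbe for Intel, r8169 for Realtek)
ethtool -i enp1s0

# Current MTU and link state
ip link show enp1s0
```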

🧮 4. Recommended Network Architectures (PVE + NAS)

1️⃣ Small-to-Medium Business Setup

| Component | Recommended Specification |
| --- | --- |
| PVE Host | At least 2 NICs: one for management/external traffic, one as a dedicated storage network (NFS / iSCSI) |
| NAS | Supports a 2.5 G / 10 G NIC (PCIe expansion if needed) |
| Switch | Managed switch supporting VLANs and Jumbo Frames (MTU 9000) |
| Protocol | Prefer NFS v3/v4; use iSCSI for higher performance |

Example topology:

[VM Network] ─ 1 GbE ─▶ [Switch]
[PVE ↔ NAS Storage Network] ─ 10 GbE ─▶ [Switch]
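On the PVE side, the dedicated storage NIC for this topology can be configured in /etc/network/interfaces roughly as follows; the interface name enp2s0 and the 10.0.10.0/24 subnet are assumptions for illustration, and the storage interface deliberately gets no gateway:

```
# /etc/network/interfaces (excerpt) -- dedicated storage link
auto enp2s0
iface enp2s0 inet static
        address 10.0.10.11/24
        mtu 9000
        # no gateway here: storage traffic stays on its own L2 segment
```

Apply with ifreload -a (PVE ships ifupdown2), and remember that the switch port and the NAS must also run MTU 9000, or large frames will be dropped.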

2️⃣ Large Environments (High-Availability Cluster)

| Component | Configuration |
| --- | --- |
| PVE Cluster (3+ nodes) | Per node: management NIC (1 GbE), cluster-communication NIC (1 GbE), storage NIC (10 GbE) |
| NAS | Dual 10 GbE NICs bonded with LACP |
| Switch | Supports VLAN + LACP + Jumbo Frames |
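A sketch of the matching LACP bond on a PVE node, again with placeholder interface names; the switch ports must be configured as an 802.3ad/LACP group on their side as well:

```
# /etc/network/interfaces (excerpt) -- two 10 GbE ports bonded with LACP
auto bond0
iface bond0 inet static
        address 10.0.10.11/24
        bond-slaves enp2s0 enp3s0
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100
        mtu 9000
```

The layer3+4 hash spreads separate TCP connections across both links; a single NFS TCP stream still rides one link, which is why LACP helps multi-VM workloads more than a lone transfer.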

⚡ 5. Performance Optimization Tips (Practical Experience)

| Optimization | Recommended Setting | Effect |
| --- | --- | --- |
| Enable Jumbo Frames (MTU 9000) | Configure on PVE, NAS, and switch | Reduces CPU load, increases throughput |
| NFS mount options | rsize=1048576,wsize=1048576,nfsvers=3,tcp (see the storage.cfg sketch below) | Improves transfer efficiency |
| iSCSI multipath | Install multipath-tools on PVE | Adds fault tolerance and load balancing |
| Dedicated storage VLAN | Separate storage from general traffic | Reduces packet interference |
| NAS SSD cache / NVMe pool | Hot-data caching | Greatly improves IOPS |
| NIC selection | Intel X550 / Mellanox ConnectX series | Stable drivers, low latency |
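In Proxmox, the NFS mount options from the table are set per storage in /etc/pve/storage.cfg; the storage ID, server address, and export path below are assumptions for illustration:

```
# /etc/pve/storage.cfg (excerpt)
nfs: nas-vmstore
        server 10.0.10.20
        export /volume1/pve
        path /mnt/pve/nas-vmstore
        content images,rootdir
        options vers=3,tcp,rsize=1048576,wsize=1048576
```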

🧠 6. Practical Hardware Recommendations (Synology + PVE Example)

| Use Case | Recommended Setup | Description |
| --- | --- | --- |
| General SMBs | 2.5 GbE + NFS | Low cost, quick setup |
| Multiple VMs / heavy data | 10 GbE NFS or iSCSI | Stable performance, low latency |
| High-availability cluster | 10 GbE iSCSI + LACP + multipath (see the sketch below) | Enterprise-grade architecture |
| Backup only | 1 GbE SMB/NFS | Performance not critical |
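For the cluster row's iSCSI + multipath combination, a minimal starting point on each PVE node might look like the following sketch; the actual multipath.conf should follow the NAS vendor's recommendations:

```
# Install and enable multipath support on each PVE node
apt install multipath-tools
systemctl enable --now multipathd

# Minimal /etc/multipath.conf (adjust per the NAS vendor's guidance)
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
EOF
systemctl restart multipathd

# Verify that both paths to the iSCSI LUN show up
multipath -ll
```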

✅ Final Recommendations

To ensure stable and high-speed performance when using external NAS with Proxmox VE:

  • Use at least 2.5 GbE or higher NICs
  • Separate storage and regular traffic (dedicated VLAN)
  • Enable Jumbo Frame (MTU 9000) on Switch and NAS
  • Choose NFS for simplicity, iSCSI for performance
  • Use Intel or Mellanox NICs; avoid Realtek
