✅ Network card performance directly affects the transfer speed and stability of external storage.
This is one of the most critical factors when evaluating the use of external NAS storage with Proxmox VE.
Below, I’ll break down the reasons, bottlenecks, recommended architecture, and hardware selection for you in detail.
🧭 1. Why Does NIC Performance Affect NAS Access Speed?
When Proxmox VE mounts external NAS storage via NFS / iSCSI / SMB,
all VM or container disk I/O (read/write) operations are transmitted over the network.
In other words:
VM disk I/O = network packets
Therefore, network bandwidth and latency define the upper limit and stability of disk performance.
⚙️ 2. Real-World Bottleneck Analysis
| Network Interface | Theoretical Bandwidth | Actual Usable Speed (after protocol overhead) | Approximate Disk Performance |
|---|---|---|---|
| 1 GbE | 125 MB/s | 100–110 MB/s | ≈ a single HDD; easily saturated |
| 2.5 GbE | 312.5 MB/s | 270–290 MB/s | ≈ half a SATA SSD; fine for small/medium VM counts |
| 10 GbE | 1250 MB/s | 900–1100 MB/s | Exceeds a SATA SSD, approaches entry-level NVMe; supports multiple VMs |
| 40 GbE / 100 GbE | 5000 / 12,500 MB/s | Architecture-dependent | Data center level |
⚠️ If multiple VMs are simultaneously reading/writing to the external NAS, 1 GbE will almost certainly become the bottleneck.
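The ceilings in the table follow directly from the line rate: divide bits per second by 8 to get bytes, then expect roughly 10–20% less in practice from TCP/NFS/iSCSI protocol overhead. A quick sketch:

```shell
#!/bin/sh
# Line-rate ceiling in MB/s for common NIC speeds (MB = 10^6 bytes,
# matching how link rates are quoted). Real NFS/iSCSI throughput
# typically lands 10-20% below this due to protocol overhead.
for gbit in 1 2.5 10 40; do
  awk -v g="$gbit" 'BEGIN { printf "%4g GbE -> %6g MB/s line rate\n", g, g * 1000 / 8 }'
done
```

For 1 GbE this gives 125 MB/s, which is why measured NFS transfers plateau around 100–110 MB/s.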
🧩 3. Key Performance Parameters and Recommendations
| Parameter | Impact | Recommended Practice |
|---|---|---|
| Bandwidth (Gbps) | Defines max throughput | At least 2.5 GbE; use 10 GbE for many VMs or database workloads |
| Latency | Affects IOPS (small-file performance) | Use direct connections or fiber, avoid multi-hop routing |
| Switch Performance | Low-end switches increase jitter | Use managed switches supporting Jumbo Frame |
| NAS Drive Type | HDD vs. SSD | SSD/Hybrid Cache significantly boosts speed |
| Protocol Choice | NFS vs. iSCSI vs. SMB | iSCSI (block-level) is typically fastest; NFS is a good balance of performance and simplicity; SMB is usually the slowest choice for VM storage |
| NIC Driver | Intel / Broadcom / Realtek | Intel is most stable; Realtek often has higher latency and lower throughput |
| Link Aggregation (LACP) | Increases bandwidth and redundancy | Ideal for multi-VM NAS access |
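As a sketch of the Link Aggregation row: on a PVE node, an LACP bond is declared in `/etc/network/interfaces`. The NIC names (`enp1s0f0`/`enp1s0f1`) and the 192.168.100.0/24 storage subnet below are placeholders, and the matching switch ports must also be configured for 802.3ad:

```
# /etc/network/interfaces fragment on the PVE node (sketch).
# Interface names and addresses are assumptions -- substitute your own.

auto bond0
iface bond0 inet static
    address 192.168.100.11/24
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad              # LACP; switch ports must match
    bond-xmit-hash-policy layer3+4 # spread flows across both links
    bond-miimon 100                # link-failure polling interval (ms)
    mtu 9000                       # only if every hop supports jumbo frames
```

Note that LACP balances *flows*, not packets: a single NFS connection still tops out at one link's speed, but multiple VMs hitting the NAS concurrently can use both.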
🧮 4. Recommended Network Architectures (PVE + NAS)
1️⃣ Small-to-Medium Business Setup
| Component | Recommended Specification |
|---|---|
| PVE Host | At least 2 NICs: • Management / External Network • Dedicated Storage Network (NFS / iSCSI) |
| NAS | Supports 2.5G / 10G NIC (PCIe expansion if needed) |
| Switch | Managed switch supporting VLAN and Jumbo Frame (MTU 9000) |
| Protocol | Prefer NFS v3/v4; use iSCSI for higher performance |
Example topology:
[VM Network] ─ 1 GbE ─▶ [Switch]
[PVE ↔ NAS Storage Network] ─ 10 GbE ─▶ [Switch]
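Once the storage network is up, the NAS export can be attached through the PVE GUI or with `pvesm` on the command line. In this sketch the server address, export path, and the storage ID `nas-vmstore` are placeholders:

```shell
# Attach an NFS export as PVE storage (sketch; IP, export path, and
# storage ID are assumptions for illustration).
pvesm add nfs nas-vmstore \
  --server 192.168.100.50 \
  --export /volume1/pve-vmstore \
  --content images,rootdir \
  --options vers=3,rsize=1048576,wsize=1048576,tcp

# Verify the storage is active and check the negotiated mount options:
pvesm status
nfsstat -m
```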
2️⃣ Large Environments (High-Availability Cluster)
| Component | Configuration |
|---|---|
| PVE Cluster (3+ nodes) | Each node has: • Management NIC (1 GbE) • Cluster Communication NIC (1 GbE) • Storage NIC (10 GbE) |
| NAS | Dual 10 GbE NICs with Bonding (LACP) |
| Switch | Supports VLAN + LACP + Jumbo Frame |
⚡ 5. Performance Optimization Tips (Practical Experience)
| Optimization | Recommended Setting | Effect |
|---|---|---|
| Enable Jumbo Frame (MTU 9000) | Configure on PVE / NAS / Switch | Reduces CPU load, increases throughput |
| NFS Mount Options | rsize=1048576,wsize=1048576,nfsvers=3,tcp | Improves transfer efficiency |
| iSCSI Multipath | Install multipath-tools on PVE | Enhances fault tolerance and load balancing |
| Dedicated VLAN for Storage | Separate storage from general traffic | Reduces packet interference |
| NAS SSD Cache / NVMe Pool | Hot data caching | Greatly improves IOPS |
| NIC Selection | Intel X550 / Mellanox ConnectX series | Stable drivers, low latency |
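For the iSCSI multipath row, a minimal sketch on the PVE node looks like the following (package names are Debian's; the portal address is a placeholder, and a real setup needs both NAS portals plus a reviewed `/etc/multipath.conf`):

```shell
# Sketch: enable iSCSI with multipath on a Debian-based PVE node.
apt install -y open-iscsi multipath-tools

# Discover targets on the NAS portal and log in (address is a placeholder):
iscsiadm -m discovery -t sendtargets -p 192.168.100.50
iscsiadm -m node --login

# Confirm multipathd has grouped the redundant paths into one device:
multipath -ll
```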
🧠 6. Practical Hardware Recommendations (Synology + PVE Example)
| Use Case | Recommended Setup | Description |
|---|---|---|
| General SMBs | 2.5 GbE + NFS | Low cost, quick setup |
| Multiple VMs / Heavy Data | 10 GbE NFS or iSCSI | Stable performance, low latency |
| High-Availability Cluster | 10 GbE iSCSI + LACP + Multipath | Enterprise-grade architecture |
| Backup Only | 1 GbE SMB/NFS | Performance not critical |
✅ Final Recommendations
To ensure stable and high-speed performance when using external NAS with Proxmox VE:
- Use at least 2.5 GbE or higher NICs
- Separate storage and regular traffic (dedicated VLAN)
- Enable Jumbo Frame (MTU 9000) on Switch and NAS
- Choose NFS for simplicity, iSCSI for performance
- Use Intel or Mellanox NICs — avoid Realtek
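A quick way to confirm the jumbo-frame recommendation actually holds end to end is to ping the NAS with a full-size, don't-fragment packet (the NAS address below is a placeholder):

```shell
# 9000-byte MTU minus 20 bytes IP header and 8 bytes ICMP header = 8972.
# -M do sets the don't-fragment bit, so if any hop (PVE NIC, switch, NAS)
# is still at MTU 1500, the ping fails instead of silently fragmenting.
ping -M do -s 8972 -c 3 192.168.100.50
```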