🔰 Problem Scenario
When attempting to expand a ZFS pool, you might encounter an error like this:
root@pbs:~# zpool add pbsdata /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi5
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
The key part of this error is:
mismatched replication level: pool uses raidz and new vdev is disk
Meaning:
Your current pool (pbsdata) is built with a RAIDZ configuration (e.g., raidz1, raidz2, or raidz3),
but you attempted to add a single disk as a new vdev.
ZFS prevents this because it would create an inconsistent redundancy level — mixing RAIDZ and a standalone disk.
If you force the command with -f, the pool will technically expand,
but that single disk would have no redundancy, and if it fails, the entire pool becomes unusable.
⚠️ Therefore, never use -f unless you fully understand the risk.
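If you want to preview how an add would change the pool before committing to it, zpool add supports a dry run; the disk names below are placeholders:
# -n only displays the resulting configuration, nothing is modified
zpool add -n pbsdata raidz1 \
/dev/disk/by-id/diskA \
/dev/disk/by-id/diskB \
/dev/disk/by-id/diskC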
🧩 Safe and Recommended Expansion Methods
Below are several safe and correct approaches to expand a ZFS pool, depending on your goal.
Option A – Add a New RAIDZ vdev (Recommended for Capacity Expansion)
If your pool currently uses a RAIDZ1 group of three disks,
the cleanest way to expand it is to add another RAIDZ1 vdev of the same width (three more disks).
Example:
# Check current structure
zpool status pbsdata
# Add a new 3-disk RAIDZ1 vdev
zpool add pbsdata raidz1 \
/dev/disk/by-id/diskA \
/dev/disk/by-id/diskB \
/dev/disk/by-id/diskC
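After the command completes, the extra capacity is available immediately. A quick check confirms the new vdev sits alongside the original one:
# Verify the new vdev and the added capacity
zpool status pbsdata
zpool list -v pbsdata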
🧠 Note:
On the ZFS versions most systems run today, you cannot add disks to an existing RAIDZ vdev after creation.
The standard method is therefore to add an entire new vdev group to the pool.
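As a side note, recent OpenZFS releases (2.3 and later) introduce RAIDZ expansion, which lets you grow an existing RAIDZ vdev one disk at a time via zpool attach. The sketch below assumes that feature is available on your system and that raidz1-0 is the vdev name reported by zpool status; check man zpool-attach on your version before relying on it:
# Sketch only: requires OpenZFS 2.3+ with the raidz_expansion feature enabled
zpool attach pbsdata raidz1-0 /dev/disk/by-id/newdiskD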
Option B – Replace Existing Disks with Larger Ones (Gradual Expansion)
You can expand the pool by replacing each disk in a vdev with a larger one.
After every disk in that vdev has been replaced and resilvering is complete,
the pool grows to the new capacity, either automatically (with autoexpand=on) or after you run zpool online -e on the replaced disks.
Example:
# Ensure autoexpand is enabled
zpool set autoexpand=on pbsdata
# Replace each disk one by one
zpool replace pbsdata \
/dev/disk/by-id/old-diskX \
/dev/disk/by-id/new-diskX
# Monitor the resilver process
zpool status -v
# After all replacements are done, if not expanded automatically:
zpool online -e pbsdata /dev/disk/by-id/new-diskX
🕐 This method doesn’t require extra drive bays, but each replacement has to finish resilvering before the next one starts, so it takes time.
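To see whether the pool has already claimed the extra space, or whether unexpanded capacity is still waiting, check the expandsize column:
# EXPANDSZ shows space that exists on the devices but hasn't been claimed by the pool yet
zpool list -o name,size,allocated,free,expandsize pbsdata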
Option C – Create a Separate Pool and Add a New Datastore (Proxmox PBS)
If you’re using Proxmox Backup Server (PBS), another approach is to keep pools separate.
You can:
- Create a new pool (e.g., pbsdata2) on new disks
- Add it as a new datastore inside PBS
Example:
zpool create pbsdata2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi5
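A single-disk pool like the one above has no redundancy, so for backup data a mirrored (or RAIDZ) layout is usually the better choice. The sketch below assumes two spare disks and the default mountpoint /pbsdata2; the pool name, disk names, and paths are examples:
# Mirrored variant of the new pool (disk names are placeholders)
zpool create pbsdata2 mirror /dev/disk/by-id/newdisk1 /dev/disk/by-id/newdisk2
# Register the pool's mountpoint as a PBS datastore
proxmox-backup-manager datastore create pbsdata2 /pbsdata2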
Each datastore then operates independently —
a simple and safe method that also enables data segregation and flexible replication setups.
⚙️ Performance / Redundancy Enhancements (Non-Capacity Additions)
If your goal is not to increase capacity,
but to improve performance or add redundancy,
ZFS allows adding special vdev types that don’t affect storage size.
| Purpose | Command Example | Notes |
|---|---|---|
| Read Cache (L2ARC) | zpool add pbsdata cache /dev/disk/by-id/yourSSD | Speeds up read operations |
| Write Log (SLOG/ZIL) | zpool add pbsdata log /dev/disk/by-id/yourEnterpriseSSD | Use SSDs with power-loss protection |
| Hot Spare Disk | zpool add pbsdata spare /dev/disk/by-id/yourSpareDisk | Automatically replaces failed drives |
These vdevs can be safely removed and do not affect data redundancy,
but they do not increase pool capacity.
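For reference, removing one of these auxiliary devices later is a single command; the path is a placeholder:
# Detach a cache, log, or spare device from the pool
zpool remove pbsdata /dev/disk/by-id/yourSSD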
🧾 Pre-Check Before Expansion
Always review your current configuration carefully:
# Check current pool layout and vdev types
zpool status pbsdata
# Use /dev/disk/by-id/ for stable naming
ls -l /dev/disk/by-id/
# Check pool and dataset usage
zpool list
zfs list
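Since Option B relies on it, it's also worth confirming the autoexpand setting up front:
# Should report "on" if you want capacity to grow automatically after replacements
zpool get autoexpand pbsdata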
⚠️ Why You Should Avoid -f to Force Add a Single Disk
Forcing a single disk into a RAIDZ pool using -f creates a mixed topology:
- One vdev with RAIDZ redundancy
- One vdev as a single, non-redundant disk
If that single disk fails, the entire pool fails,
because ZFS pools depend on the integrity of all vdevs.
❌ One vdev failure = entire pool failure.
✅ Only add properly redundant vdevs or replace existing drives safely.
✅ Summary
| Goal | Recommended Method |
|---|---|
| Increase capacity | Add a new RAIDZ vdev (same layout as original) |
| Gradual capacity expansion | Replace all disks with larger ones |
| Separate datastore (PBS) | Create a new pool (e.g., pbsdata2) |
| Improve performance | Add L2ARC / SLOG SSDs |
| Add redundancy | Add spare drives |
In short:
Expanding a ZFS pool is not about adding random disks —
it’s about maintaining consistent redundancy and data integrity.
Always expand by adding full vdev groups or replacing disks properly.