Nuface Blog
ZFS Pool Expansion Procedures

Posted on 2025-11-03 by Rico

🔰 Problem Scenario

When attempting to expand a ZFS pool, you might encounter an error like this:

root@pbs:~# zpool add pbsdata /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi5
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

The key part of this error is:

mismatched replication level: pool uses raidz and new vdev is disk

Meaning:

Your current pool (pbsdata) is built with a RAIDZ configuration (e.g., raidz1, raidz2, or raidz3),
but you attempted to add a single disk as a new vdev.

ZFS prevents this because it would create an inconsistent redundancy level, mixing a RAIDZ vdev with a standalone disk.
If you force the command with -f, the pool will technically expand,
but the new disk becomes a non-redundant stripe member: if it fails, the entire pool becomes unusable.

⚠️ Therefore, never use -f unless you fully understand the risk.


🧩 Safe and Recommended Expansion Methods

Below are several safe and correct approaches to expand a ZFS pool, depending on your goal.


Option A – Add a New RAIDZ vdev (Recommended for Capacity Expansion)

If your pool currently uses a RAIDZ1 group of three disks,
you expand it by adding another RAIDZ1 group, ideally identical in width and disk size.

Example:

# Check current structure
zpool status pbsdata

# Add a new 3-disk RAIDZ1 vdev
zpool add pbsdata raidz1 \
  /dev/disk/by-id/diskA \
  /dev/disk/by-id/diskB \
  /dev/disk/by-id/diskC

🧠 Note:
On OpenZFS releases before 2.3, you cannot add disks to an existing RAIDZ vdev after creation;
the correct method is to add an entire new vdev group to the pool.
(OpenZFS 2.3 introduced RAIDZ expansion, which can widen a RAIDZ vdev one disk at a time via zpool attach.)
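On OpenZFS 2.3 or newer, RAIDZ expansion can widen an existing RAIDZ vdev one disk at a time with zpool attach. A hedged sketch: the vdev name raidz1-0 and the diskD path are placeholders, so read the real vdev name from zpool status first.

```shell
# Requires OpenZFS >= 2.3 (RAIDZ expansion feature).
# raidz1-0 and diskD are placeholders; take the real vdev name from `zpool status`.
zpool status pbsdata                                  # note the RAIDZ vdev name, e.g. raidz1-0
zpool attach pbsdata raidz1-0 /dev/disk/by-id/diskD   # widen the vdev by one disk
zpool status pbsdata                                  # watch the expansion progress
```

Existing data keeps its old parity ratio until it is rewritten, so the full extra capacity appears gradually.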


Option B – Replace Existing Disks with Larger Ones (Gradual Expansion)

You can expand the pool by replacing each disk in a vdev with a larger one.
After every disk in that vdev has been replaced and resilvering is complete,
the pool grows to the new capacity, either automatically (with autoexpand=on) or after an explicit zpool online -e.

Example:

# Ensure autoexpand is enabled
zpool set autoexpand=on pbsdata

# Replace each disk one by one
zpool replace pbsdata \
  /dev/disk/by-id/old-diskX \
  /dev/disk/by-id/new-diskX

# Monitor the resilver process
zpool status -v

# After all replacements are done, if not expanded automatically:
zpool online -e pbsdata /dev/disk/by-id/new-diskX

🕐 This method doesn’t require new slots but takes time for each resilver process to complete.
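Replacements should be done one disk at a time: start the next zpool replace only after the previous resilver has finished. A minimal wait-loop sketch for the pbsdata pool above:

```shell
# Poll until the current resilver completes before touching the next disk.
while zpool status pbsdata | grep -q "resilver in progress"; do
  sleep 60
done
echo "resilver finished, safe to replace the next disk"
```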


Option C – Create a Separate Pool and Add a New Datastore (Proxmox PBS)

If you’re using Proxmox Backup Server (PBS), another approach is to keep pools separate.

You can:

  1. Create a new pool (e.g., pbsdata2) on new disks
  2. Add it as a new datastore inside PBS

Example:

zpool create pbsdata2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi5

Each datastore then operates independently —
a simple and safe method that also enables data segregation and flexible replication setups.
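The two steps above can be sketched end to end. The disk paths, pool layout, and datastore name below are assumptions; a RAIDZ1 layout is shown instead of the single disk so the new pool has its own redundancy.

```shell
# 1. Create the new pool with its own redundancy (disk paths are placeholders)
zpool create pbsdata2 raidz1 \
  /dev/disk/by-id/diskD \
  /dev/disk/by-id/diskE \
  /dev/disk/by-id/diskF

# 2. Register it as a PBS datastore (the pool mounts at /pbsdata2 by default)
proxmox-backup-manager datastore create backup2 /pbsdata2
```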


⚙️ Performance / Redundancy Enhancements (Non-Capacity Additions)

If your goal is not to increase capacity,
but to improve performance or add redundancy,
ZFS allows adding special vdev types that don’t affect storage size.

Purpose | Command Example | Notes
Read Cache (L2ARC) | zpool add pbsdata cache /dev/disk/by-id/yourSSD | Speeds up read operations
Write Log (SLOG/ZIL) | zpool add pbsdata log /dev/disk/by-id/yourEnterpriseSSD | Use SSDs with power-loss protection
Hot Spare Disk | zpool add pbsdata spare /dev/disk/by-id/yourSpareDisk | Automatically replaces failed drives

These vdevs can be safely removed and do not affect data redundancy,
but they do not increase pool capacity.
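Unlike data vdevs, cache, log, and spare devices can also be detached again later. A sketch, reusing the placeholder device paths from the table:

```shell
# Cache (L2ARC) and log (SLOG) devices are removed with `zpool remove`
zpool remove pbsdata /dev/disk/by-id/yourSSD
zpool remove pbsdata /dev/disk/by-id/yourEnterpriseSSD

# An unused hot spare is removed the same way
zpool remove pbsdata /dev/disk/by-id/yourSpareDisk
```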


🧾 Pre-Check Before Expansion

Always review your current configuration carefully:

# Check current pool layout and vdev types
zpool status pbsdata

# Use /dev/disk/by-id/ for stable naming
ls -l /dev/disk/by-id/

# Check pool and dataset usage
zpool list
zfs list
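Before committing to an expansion, zpool add supports -n (dry run), which prints the pool layout that would result without changing anything. A sketch with the placeholder disk paths from Option A:

```shell
# -n shows what the pool would look like after the add, without applying it
zpool add -n pbsdata raidz1 \
  /dev/disk/by-id/diskA \
  /dev/disk/by-id/diskB \
  /dev/disk/by-id/diskC
```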

⚠️ Why You Should Avoid -f to Force Add a Single Disk

Forcing a single disk into a RAIDZ pool using -f creates a mixed topology:

  • One vdev with RAIDZ redundancy
  • One vdev as a single, non-redundant disk

If that single disk fails, the entire pool fails,
because ZFS pools depend on the integrity of all vdevs.

❌ One vdev failure = entire pool failure.
✅ Only add properly redundant vdevs or replace existing drives safely.
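This failure mode is easy to reproduce safely with file-backed vdevs (for experiments only; never build production pools on files). Everything below runs against throwaway image files:

```shell
# Build a small test pool from sparse files (ZFS requires vdevs >= 64 MiB)
truncate -s 128M /tmp/z1.img /tmp/z2.img /tmp/z3.img /tmp/z4.img
zpool create demo raidz1 /tmp/z1.img /tmp/z2.img /tmp/z3.img

# Forcing in a lone disk reproduces the mixed topology from the error above
zpool add -f demo /tmp/z4.img
zpool status demo   # shows raidz1-0 plus a standalone, non-redundant file vdev

# Clean up the experiment
zpool destroy demo
rm /tmp/z?.img
```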


✅ Summary

Goal | Recommended Method
Increase capacity | Add a new RAIDZ vdev (same layout as original)
Gradual capacity expansion | Replace all disks with larger ones
Separate datastore (PBS) | Create a new pool (e.g., pbsdata2)
Improve performance | Add L2ARC / SLOG SSDs
Add redundancy | Add spare drives

In short:
Expanding a ZFS pool is not about adding random disks —
it’s about maintaining redundancy consistency and data integrity.
Always expand by adding full vdev groups or replacing disks properly.
