This guide demonstrates how to integrate:
- PVE1 — Proxmox VE node #1
- PVE2 — Proxmox VE node #2
- PBS1 — Proxmox Backup Server
- Datastore — mainstore
The goals are:
- Centralized backup
- Restore VMs to any node
- Move (migrate) VMs between nodes without clustering
- Keep each PVE fully independent
- Avoid quorum, split-brain, or cluster complexity
1. Architecture Overview
[PVE1] ----+
           |===> [PBS1 - Proxmox Backup Server] → mainstore
[PVE2] ----+
Both PVE nodes back up to the same PBS.
A VM backed up on PVE1 can be restored directly on PVE2.
2. Install PBS1
Install PBS on a physical host or in a VM.
Access the web interface:
https://PBS1:8007
3. Create a Datastore
PBS Web UI → Datastore → Add
- Name: mainstore
- Path: /mnt/mainstore
PBS will store all VM backups here.
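If you prefer the shell, the same datastore can be created from the PBS console. A minimal sketch, assuming the backing disk is already mounted at /mnt/mainstore:

```
# Create the directory if it does not exist yet
mkdir -p /mnt/mainstore

# Register the directory with PBS as datastore "mainstore"
proxmox-backup-manager datastore create mainstore /mnt/mainstore
```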
4. Create an API Token for PVE
PBS UI → Access → API Tokens
Create a token:
- User: root@pam
- Token ID: pve-token
Copy the token Secret.
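The token can also be generated on the PBS command line; the secret is only shown once, so record it immediately:

```
# Create an API token named "pve-token" for root@pam;
# the output includes the token secret (shown only once)
proxmox-backup-manager user generate-token root@pam pve-token
```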
5. Assign Permissions (Required)
PBS UI → Access → Permissions → Add
Path: /datastore/mainstore
User/Token: root@pam!pve-token
Role: DatastoreBackup
Propagate: Yes
Without this, PVE will get:
401 permission denied
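The equivalent shell command, assuming the token from step 4:

```
# Grant the DatastoreBackup role on mainstore to the API token
proxmox-backup-manager acl update /datastore/mainstore DatastoreBackup \
  --auth-id 'root@pam!pve-token'
```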
6. Obtain PBS Fingerprint (TLS verification)
PBS UI → Datastore → Summary → Show Connection Information
Copy the TLS fingerprint.
This must be entered into PVE when adding the PBS storage.
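Alternatively, read the fingerprint from the PBS shell:

```
# Prints certificate details, including the SHA-256 fingerprint
proxmox-backup-manager cert info
```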
7. Add PBS Storage on PVE1 and PVE2
PVE UI → Datacenter → Storage → Add → Proxmox Backup Server
Fill in:
Server: PBS1
Datastore: mainstore
Username: root@pam!pve-token
Secret: (token secret)
Fingerprint: (from Show Connection Information)
Content: Backup
Now both PVEs can back up to PBS1.
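The same storage can be added from each node's shell; a sketch with placeholder values for the secret and fingerprint (the storage ID is chosen here to match the datastore name):

```
# Run on PVE1 and PVE2; substitute the real token secret and fingerprint
pvesm add pbs mainstore \
    --server PBS1 \
    --datastore mainstore \
    --username 'root@pam!pve-token' \
    --password '<token-secret>' \
    --fingerprint '<sha256-fingerprint>' \
    --content backup
```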
8. Create Backup Jobs
PVE → Datacenter → Backup → Add
Recommended settings:
- Mode: Snapshot
- Compression: zstd
- Schedule: daily or weekly
- Storage: mainstore
PBS will store deduplicated, versioned backups.
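A one-off backup can also be triggered from a PVE shell to verify the setup; VM ID 100 is a placeholder:

```
# Snapshot-mode backup of VM 100 to the PBS-backed storage
vzdump 100 --storage mainstore --mode snapshot
```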
9. Cross-Node Restore (PVE1 → PVE2)
On PVE2:
- Go to the mainstore storage → Backups
- Select the backup file (created on PVE1)
- Choose a target storage (e.g., data-zfs)
Click Restore, and the VM will boot on PVE2.
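The same restore works from the PVE2 shell; the volume ID below is a placeholder copied from the list output, and 200 is the new VM ID:

```
# List backups visible on PVE2 (includes those created by PVE1)
pvesm list mainstore

# Restore a PVE1 backup as VM 200 onto the local data-zfs storage
qmrestore 'mainstore:backup/vm/100/2024-01-01T00:00:00Z' 200 --storage data-zfs
```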
10. ⚠️ Important: Configurations Must Match Between PVE1 & PVE2
To ensure restored VMs work properly, the two PVE environments must match.
10.1 Network interface names must match
Example:
enp6s0
vmbr0
vmbr10
If PVE2 does not have the same bridge names, the restored VM cannot attach its NICs and will fail to start.
10.2 VLAN configuration must match
Both PVEs must have identical:
- Bridges
- VLAN aware settings
- Trunked VLANs
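For illustration, a bridge definition in /etc/network/interfaces that both nodes would share; the IP address is a placeholder and stays node-specific, but the bridge name, port, and VLAN settings must be identical:

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp6s0
    bridge-stp off
    bridge-fd 0
    # VLAN-aware bridge trunking the same VLAN range on both nodes
    bridge-vlan-aware yes
    bridge-vids 2-4094
```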
10.3 Storage IDs must match
Example:
PVE1:
local-lvm
data-zfs
PVE2:
local-lvm
data-zfs
Otherwise:
storage 'data-zfs' does not exist
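A quick check is to compare the storage lists on both nodes:

```
# The storage IDs in the first column must match on PVE1 and PVE2
pvesm status
```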
10.4 BIOS / Machine Type / CPU Type (recommended)
Keeping the BIOS (SeaBIOS/OVMF), machine type, and CPU type consistent between nodes prevents CPU-incompatibility boot errors after a restore.
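To check and, if needed, align these settings, a sketch using placeholder VM ID 100 (x86-64-v2-AES is a broadly compatible CPU baseline on recent PVE releases; adjust to your hardware):

```
# Inspect the VM's firmware, machine type, and CPU settings
qm config 100 | grep -E '^(bios|machine|cpu):'

# Pin a CPU type that both hosts support
qm set 100 --cpu x86-64-v2-AES
```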
11. Summary — Best Way to Combine Two PVE Nodes
| Feature | Supported |
|---|---|
| Centralized backup | ✔ PBS |
| Cross-node restore | ✔ |
| Migration without cluster | ✔ |
| Independent nodes | ✔ |
| Avoid quorum problems | ✔ |
| True HA | ✘ (cluster required) |
This method provides the safest, simplest, and most stable way to integrate two PVE nodes without the risks of clustering.