Proxmox + Ceph Integration and Distributed Storage Deployment

Posted on 2025-10-31 by Rico

🔰 Introduction

In any virtualization infrastructure, storage reliability and scalability are critical for maintaining system stability.
As the number of servers grows and administrators require seamless live migration and high availability (HA),
traditional centralized storage (such as NAS or iSCSI) often becomes a performance bottleneck or single point of failure.

To address this, Proxmox VE natively integrates the Ceph distributed storage system,
enabling a self-healing, linearly scalable, and highly available storage backend across multiple nodes.

This article explains:
1️⃣ The core architecture and principles of Ceph
2️⃣ How to deploy Ceph within a Proxmox cluster
3️⃣ Practical optimization and reliability strategies


🧩 1. What Is Ceph?

1️⃣ Definition

Ceph is an open-source, software-defined distributed storage system that provides:

  • Block storage (RBD) for VMs and containers
  • Object storage (RGW, compatible with S3 APIs)
  • File system storage (CephFS, a distributed file system)

Ceph ensures data durability by distributing data across multiple nodes using replication or erasure coding,
so even if hardware fails, your storage remains available and consistent.


🧱 Architecture Overview

            ┌──────────────────────────────┐
            │          Proxmox VE          │
            │      (KVM / LXC / HA)        │
            └──────────────┬───────────────┘
                           │
                    Ceph Client (RBD)
                           │
           ┌──────────────────────────────────┐
           │            Ceph Cluster          │
           │ ┌──────────┬──────────┬────────┐ │
           │ │   MONs   │   OSDs   │   MGRs │ │
           │ └──────────┴──────────┴────────┘ │
           │        (Distributed Storage)     │
           └──────────────────────────────────┘

2️⃣ Core Components

Component   Full Name               Function
MON         Monitor                 Maintains cluster membership and health information.
OSD         Object Storage Daemon   Stores actual data and handles replication.
MGR         Manager                 Provides metrics, dashboards, and API services.
MDS         Metadata Server         Manages file system metadata (used for CephFS).

โš™๏ธ 2. Advantages of Proxmox + Ceph Integration

FeatureBenefit
Native IntegrationCeph is built directly into Proxmox VEโ€™s Web GUI and CLI tools.
No Single Point of FailureData is distributed across nodes rather than stored centrally.
Self-HealingAutomatically replicates and rebuilds data when disks or nodes fail.
Scalable ArchitectureAdd disks or nodes dynamically without downtime.
Live Migration SupportAll nodes share the same Ceph storage pool, enabling smooth VM migration.

🧰 3. Example Deployment Architecture

Environment Overview

Node         IP Address   Role
pve-node01   10.0.0.11    Proxmox + Ceph MON + OSD
pve-node02   10.0.0.12    Proxmox + Ceph MON + OSD
pve-node03   10.0.0.13    Proxmox + Ceph MON + OSD

Recommendations:

  • At least three nodes are required for quorum.
  • Use two network interfaces per node:
    • One for management and VM traffic (Public Network)
    • One dedicated to Ceph replication (Cluster Network)

Network Design

Public Network  : 10.0.0.0/24   (Client I/O)
Cluster Network : 192.168.100.0/24 (Replication / Heartbeat)

💡 Keep Ceph replication traffic separate from VM traffic to avoid latency or performance degradation.
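
As a minimal sketch of this two-network layout (the interface names ens18/ens19, the vmbr0 bridge, and the 10.0.0.1 gateway are assumptions that will differ per host), /etc/network/interfaces on pve-node01 might look like this:

# /etc/network/interfaces (excerpt) on pve-node01 -- names are examples
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.11/24            # Public Network: management, VM and client I/O
    gateway 10.0.0.1                # example gateway, adjust to your environment
    bridge-ports ens18
    bridge-stp off
    bridge-fd 0

auto ens19
iface ens19 inet static
    address 192.168.100.11/24       # Cluster Network: Ceph replication / heartbeat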


🧭 4. Step-by-Step Deployment

Step 1 – Enable the Ceph Repository and Install Packages

apt update
apt install ceph ceph-common ceph-fuse

Proxmox VE 9.x ships with Ceph Reef or newer Ceph Squid builds by default.
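
The apt commands above assume a Ceph repository is already configured on the node. If it is not, recent pveceph versions can set up the repository and install the packages in one step; the sketch below assumes the no-subscription repository on Proxmox VE 9 / Debian Trixie with Ceph Squid, so adjust the repository name and codename to your environment:

# Let pveceph configure the repository and install Ceph in one step
pveceph install --repository no-subscription

# Equivalent manual repository entry (release and codename are assumptions, verify before use)
echo "deb http://download.proxmox.com/debian/ceph-squid trixie no-subscription" \
    > /etc/apt/sources.list.d/ceph.list
apt update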


Step 2 – Initialize Ceph Cluster

In the Proxmox Web UI:

  1. Go to Datacenter → Ceph → Install Ceph
  2. After installation, click Create Cluster
  3. Define both Public and Cluster Networks

CLI method:

pveceph init --cluster-network 192.168.100.0/24 \
             --network 10.0.0.0/24
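
To confirm the initialization, inspect the generated configuration; it lives on the clustered filesystem and should reflect both networks defined above:

# Configuration created by pveceph init (shared across the cluster)
cat /etc/pve/ceph.conf

# The [global] section should contain entries like:
#   public_network  = 10.0.0.0/24
#   cluster_network = 192.168.100.0/24
# (key spelling varies slightly between Ceph releases)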

Step 3 – Add MON and MGR Nodes

pveceph mon create
pveceph mgr create

Repeat for all participating nodes or use Add Monitor in the Web UI.
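
Once monitors are running on all three nodes, quorum can be verified from any of them (a quick sanity check; output details vary by Ceph release):

# Show the monitor map and quorum state; expect 3 monitors, all in quorum
ceph mon stat

# More detailed quorum information
ceph quorum_status --format json-pretty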


Step 4 – Create OSDs (Object Storage Daemons)

Select available disks for Ceph storage:

pveceph osd create /dev/sdb

Or use the Web GUI: Ceph → OSD → Create

💡 Use SSD or NVMe drives for Ceph’s DB/WAL partitions to enhance I/O performance.
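
As a hedged example of that layout (the device names are placeholders, and option names may differ slightly between pveceph versions), the OSD data can go on an HDD while its DB/WAL lives on a faster NVMe device:

# HDD for bulk data, NVMe device/partition for the RocksDB + WAL
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1

# Verify the new OSDs and their position in the CRUSH tree
ceph osd tree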


Step 5 – Create a Storage Pool

pveceph pool create ceph-pool --size 3 --min_size 2

Then add it as a Proxmox storage backend:

pvesh create /storage --storage ceph-rbd \
--type rbd --pool ceph-pool \
--monhost 10.0.0.11,10.0.0.12,10.0.0.13
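
A quick way to confirm that the pool and the new storage entry are usable (names follow the examples above):

# Pool-level view: replication size, min_size, and usage
ceph osd pool ls detail
ceph df

# Proxmox view: the ceph-rbd storage should show as active on every node
pvesm status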

🧠 5. Performance and Reliability Design

Setting                           Recommended Configuration
Replica Count (size)              3 replicas for production reliability
Min. Active Replicas (min_size)   2 for safe I/O operations
Disk Layout                       Use SSD/NVMe for DB/WAL; HDD for bulk data
Network Design                    Separate Ceph cluster and public networks
Monitoring                        Use the Ceph Dashboard or ceph -s for real-time health checks
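
For routine monitoring, a few standard Ceph commands give a quick health overview:

ceph -s              # overall status: health, MON/MGR/OSD counts, PG states
ceph health detail   # explanation of any active warning or error
ceph osd df          # per-OSD utilization, useful for spotting imbalance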

🗄️ 6. Integration with HA and PBS

  • With High Availability (HA):
    All Proxmox nodes share the same Ceph storage pool, allowing seamless VM migration between nodes (see the ha-manager sketch below).
  • With Proxmox Backup Server (PBS):
    Ceph RBD pools can be mounted as a backend datastore for incremental backups.

Architecture Example:

[Proxmox Cluster]───┐
                    │  (Ceph RBD)
                    ▼
           [Ceph Storage Pool]
                    │
         [Proxmox Backup Server]
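
As a minimal sketch of the HA side (VM ID 100 is an example), a VM whose disks live on the shared Ceph pool can be registered as an HA resource so it is restarted or migrated automatically when its node fails:

# Register VM 100 as an HA resource; its disks on ceph-rbd are reachable from every node
ha-manager add vm:100

# Check HA resource and node states
ha-manager status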

โš™๏ธ 7. Cross-Site Replication and DR Extension

  • Use Ceph RBD Mirror for real-time block-level replication between data centers (see the sketch after this list).
  • Combine with Proxmox Backup Server (PBS) for snapshot-based offsite backups.
  • For long-distance or high-latency environments, use Active/Passive replication with periodic synchronization.
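
A full mirroring setup also involves peering the two clusters and running the rbd-mirror daemon on the secondary site; the commands below are only a sketch of enabling snapshot-based mirroring for the pool created earlier (peer bootstrap and daemon deployment are omitted, and the image name is an example):

# On the primary site: enable mirroring on the pool in per-image mode
rbd mirror pool enable ceph-pool image

# Enable snapshot-based mirroring for one VM disk
rbd mirror image enable ceph-pool/vm-100-disk-0 snapshot

# Check mirroring status for the pool
rbd mirror pool status ceph-pool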

✅ Conclusion

By integrating Proxmox VE with Ceph, enterprises can build a fully distributed, self-healing, and scalable storage layer
that eliminates the single points of failure common in traditional architectures.

This unified design delivers:

  • Built-in high availability
  • Linear scalability
  • Automated recovery
  • Simplified storage management

Together, Proxmox and Ceph form a powerful open-source foundation for enterprise-grade virtualization and cloud infrastructure.

💬 In the next article, we’ll explore
“Proxmox Cloud Management and Hybrid Architecture Integration”,
demonstrating how to extend your Proxmox environment into hybrid and multi-cloud deployments.
