Rob Pankow

Private Cloud Storage for Proxmox and Kubernetes

Mar 20, 2026  |  8 min read

Last edited: Mar 29, 2026

Enterprise infrastructure teams are navigating a practical reality: they are running two modernization tracks at once. Kubernetes is expanding to host cloud-native and stateful workloads, while Proxmox is absorbing VM estates from VMware as organizations respond to licensing and pricing changes following the Broadcom acquisition. The combination is logical — both are open-source, both are production-ready, and together they cover most of the runtime surface a private cloud needs to serve. The problem that surfaces quickly is storage.

Why Proxmox and Kubernetes Are Emerging Together

VMware’s traditional strength was as a single control plane for virtualized infrastructure. When enterprises move away from it, they face a choice: find a single replacement or accept a split-runtime model. Many are landing on Proxmox for VMs and Kubernetes for containers, and there are good reasons for this. Proxmox handles legacy and monolithic applications that are not ready to be containerized. Kubernetes handles modern, cloud-native services that benefit from orchestration, autoscaling, and GitOps-driven operations. The challenge is that these two platforms were not designed with a common storage layer in mind.

Proxmox storage solutions — Ceph, LVM, NFS, ZFS — work well in Proxmox-native deployments but do not map cleanly to the CSI model that Kubernetes expects. Kubernetes relies on CSI drivers to provision, attach, and manage volumes dynamically, using StorageClass abstractions that have no equivalent in a Proxmox-native storage model. If teams do not address this gap early, they end up with separate storage pools, separate tooling, and separate operational models for environments that often serve the same business services.
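
For readers less familiar with the CSI model, here is a minimal sketch of the two objects behind dynamic provisioning. The provisioner name and the pool parameter are placeholders rather than any specific driver's schema; the point is that a Proxmox-native backend such as LVM or ZFS has no equivalent of this request-driven workflow.

```yaml
# A StorageClass tells Kubernetes which CSI driver to call, and with which
# parameters, whenever a new volume is requested. Names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared-block
provisioner: csi.example.com            # hypothetical CSI driver
parameters:
  pool: shared-pool-01                  # backend-specific parameter (assumed)
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
# A PersistentVolumeClaim requests a volume from that class; the driver
# provisions and attaches it on demand, without a manual storage-admin step.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: shared-block
  resources:
    requests:
      storage: 100Gi
```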

The Storage Silo Problem

Running Proxmox and Kubernetes on separate storage foundations creates compounding operational problems over time.

Different protection policies create inconsistent recovery guarantees. A database running in a Proxmox VM may be snapshotted on one schedule with one retention window, while the same database migrated to a Kubernetes StatefulSet uses an entirely different snapshot mechanism with different granularity. When an incident requires cross-environment recovery, teams are reconciling two incompatible data protection models under pressure.
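
To make the mismatch concrete, the Kubernetes side handles snapshots as API objects served by the CSI snapshot machinery, entirely outside the hypervisor's snapshot tooling. A minimal sketch, assuming the snapshot CRDs are installed and reusing the placeholder driver and PVC names from the sketch above:

```yaml
# Kubernetes snapshots are CSI API objects, separate from Proxmox/QEMU
# VM snapshots. Driver and PVC names below are placeholders.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: shared-block-snap
driver: csi.example.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: shared-block-snap
  source:
    persistentVolumeClaimName: app-data
```

Note that neither object carries a schedule or a retention window; those come from whatever additional tooling the team bolts on, which is exactly where the drift from the VM-side policy begins.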

Capacity planning becomes fragmented. Teams end up with two sets of storage pools that cannot efficiently share headroom. When the Proxmox pool is under pressure, idle capacity in the Kubernetes pool cannot help, and vice versa. This inflates the total storage footprint relative to what a unified pool would require.

Observability gaps emerge. Storage health, IOPS, latency, and capacity metrics come from different systems with different formats. Building a coherent view of storage health across both environments requires significant integration work that most platform teams do not prioritize until it becomes a problem.

Designing a Unified Block Storage Foundation

The architecture that solves this problem treats storage as a shared fabric that exposes different interfaces depending on the consumer — iSCSI or NVMe/TCP for Proxmox VMs, and a CSI driver with dynamic provisioning for Kubernetes workloads.

From the storage system’s perspective, the underlying pool is the same: a distributed block storage layer that manages capacity, replication, and erasure coding across physical nodes. Proxmox attaches VM disks as iSCSI or NVMe/TCP block devices, which Proxmox manages using its standard disk interface. Kubernetes workloads consume the same pool through a CSI driver, which handles PVC provisioning, attachment, and lifecycle management.
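
On the Kubernetes side, a stateful workload simply requests its volumes against a StorageClass that fronts the shared pool. A sketch with placeholder names, continuing the earlier example:

```yaml
# A StatefulSet drawing its per-replica volumes from the same pool that
# Proxmox attaches to over iSCSI or NVMe/TCP. Names are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: shared-block   # class backed by the shared pool
        resources:
          requests:
            storage: 200Gi
```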

This design gives teams one capacity pool to manage and one set of protection policies to enforce. Snapshots, retention windows, and RPO targets are configured once at the storage level and apply consistently regardless of whether the workload is a VM or a container. When a database needs to be migrated from Proxmox to Kubernetes, storage policy travels with the workload rather than having to be rebuilt from scratch in a new system.
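
The Kubernetes half of that migration path uses the standard CSI restore mechanism: a new PVC provisioned from an existing VolumeSnapshot. Whether a snapshot of a former Proxmox VM disk can be surfaced as a VolumeSnapshot in the first place depends on the storage backend, so treat this as an illustration of the mechanism rather than a guaranteed cross-runtime workflow:

```yaml
# Standard CSI snapshot restore: a new PVC is provisioned from an existing
# VolumeSnapshot (names reuse the placeholders from the earlier sketches).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restored
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: shared-block
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: app-data-snap
  resources:
    requests:
      storage: 100Gi
```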

NVMe over TCP is the recommended transport for this architecture. It delivers near-local NVMe performance over standard Ethernet infrastructure without specialized networking hardware, which means both Proxmox hosts and Kubernetes nodes can consume storage over the same network fabric they already use for everything else. There is no Fibre Channel infrastructure to manage, no InfiniBand requirement, and no proprietary SAN hardware in the path.

Hardware Efficiency Through Storage Disaggregation

Disaggregated storage, where compute and storage are separate pools that scale independently, delivers a significant efficiency gain in mixed Proxmox and Kubernetes environments.

In a hyper-converged model, adding capacity means adding full nodes, including compute resources the team may not need. In a disaggregated design, storage nodes can be scaled independently to meet growing capacity demands without over-provisioning compute. The same storage pool serves both Proxmox hosts and Kubernetes nodes, so headroom in the pool is shared across both environments. Teams can right-size each dimension independently as the platform grows.

This also improves hardware refresh economics. When a generation of Proxmox hosts reaches end of life, storage nodes are not necessarily on the same refresh cycle. The storage layer continues serving both environments through hardware transitions without a forced synchronized upgrade.

Day-2 Operations Advantages

The operational benefits of a unified storage foundation compound over time. Platform teams that manage one storage system instead of two develop deeper expertise and faster incident response. When a volume is slow, the investigation path is the same regardless of whether the consumer is a VM or a Kubernetes pod. When capacity needs to be added, there is one procedure and one pool to extend.

Incident response across mixed environments improves materially. During a storage-affecting event, teams do not need to coordinate between a Proxmox storage team and a Kubernetes storage team. Runbooks, escalation paths, and tooling are unified. This matters most under pressure — when an incident is affecting production services that run across both environments, response speed is directly proportional to operational clarity.

Compliance and governance controls become easier to enforce. Data protection policies — retention, encryption at rest, access controls — apply uniformly across both runtime types. Audit evidence is collected from one system. When a compliance review requires demonstrating that all production data meets a specific RPO target, that evidence comes from one source rather than requiring a reconciliation between two separate storage systems.

Migration Sequencing: Start Storage Consolidation Early

The most common mistake in Proxmox and Kubernetes modernization programs is treating storage consolidation as a phase-two activity. Teams stand up both platforms on temporary storage solutions — local disks, NFS shares, Ceph clusters that are not sized for the long term — and plan to consolidate later. Later usually never arrives, or it arrives after the temporary solutions have become deeply embedded in operational practice.

Temporary storage solutions become a permanent operational burden. Service-level dependencies form around their specific behaviors. Snapshot schedules, mount paths, and access patterns get baked into runbooks and automation. Migrating off them later requires coordinating data migrations with production service continuity, which is significantly harder than designing the unified foundation before workloads are running.

The right sequence is to define the storage architecture before either platform is in production, size the shared pool for the expected combined workload, and bring both Proxmox and Kubernetes online using the unified foundation from day one. This requires more upfront design work but eliminates a class of future migration risk that consistently proves expensive.

Where simplyblock Fits

simplyblock is a software-defined storage platform built on NVMe over TCP that is designed for exactly this kind of mixed-runtime private cloud environment. It exposes block storage to Proxmox VMs via standard protocols while providing a CSI driver for Kubernetes with full support for dynamic provisioning, instant snapshots, thin provisioning, and encryption at rest. One storage pool, one management interface, one set of protection policies — shared across both environments.

For teams building a post-VMware private cloud on Proxmox and Kubernetes, simplyblock provides the shared storage foundation that makes the combined platform operationally coherent rather than two separate systems that happen to run in the same data center.

Questions and Answers

Why does unified storage policy matter when running both Proxmox and Kubernetes?

Because business services often span both runtime types, and inconsistent protection policies create recovery risk. If a database in a Proxmox VM has a different snapshot schedule and retention window than the same database migrated to a Kubernetes StatefulSet, teams face a data reconciliation problem during incidents. Unified storage policy means RPO, RTO, and retention targets are defined once and applied consistently across both environments.

What protocol should Proxmox use to attach shared storage in this architecture?

NVMe over TCP or iSCSI over standard Ethernet are both viable. NVMe/TCP is preferred where performance matters because it delivers near-local NVMe latency and IOPS without specialized network hardware. iSCSI is a reasonable choice for environments where existing tooling is already iSCSI-native. Both can coexist with CSI-provisioned Kubernetes volumes drawing from the same underlying storage pool.

When in a modernization program should storage consolidation happen?

Storage consolidation should happen before both platforms are running production workloads — ideally as a first-phase design decision, not a follow-on activity. Teams that defer storage consolidation consistently find that temporary solutions become permanent operational debt. The migration cost of moving live workloads off embedded temporary storage is significantly higher than designing the shared foundation upfront.

Is a disaggregated storage model more expensive than hyper-converged?

Not over a realistic operational horizon. Hyper-converged models appear simpler initially, but they force compute and storage to scale together, which often leads to over-provisioning one dimension to meet the other’s requirements. Disaggregated storage lets teams scale compute and storage independently, matching actual growth patterns in each dimension. Hardware refresh cycles are also decoupled, which reduces the cost of generational transitions on either side.
