VMware storage migration to Kubernetes is the process of replacing vSAN storage policies, VMFS datastores, and vSphere-native volume management with Kubernetes-native constructs: CSI drivers, StorageClasses, PersistentVolumeClaims, and VolumeSnapshots. The migration touches more than just data movement — it requires rethinking how storage policy, performance tiers, and lifecycle operations are expressed and enforced.
The motivation for this migration has accelerated since VMware’s acquisition by Broadcom changed licensing and support terms. Infrastructure teams running vSAN-backed workloads on vSphere are evaluating Kubernetes-native alternatives that provide equivalent storage policy management without proprietary lock-in. Understanding the conceptual mapping between VMware and Kubernetes storage primitives is the foundation for a successful migration.
VMware vs. Kubernetes Storage Concepts
VMware storage concepts do not have direct one-to-one equivalents in Kubernetes. Teams that attempt a literal port often reproduce the same architectural constraints that made VMware expensive to scale. The table below shows the conceptual mapping:
| VMware concept | Kubernetes equivalent | Key difference |
|---|---|---|
| vSAN storage policy | StorageClass parameters | Kubernetes StorageClass is vendor-neutral; policy enforced by CSI driver |
| VMFS datastore | PersistentVolume (PV) | PVs are per-workload; no shared datastore namespace |
| vSphere volume snapshot | CSI VolumeSnapshot | CSI snapshots are portable across clusters if the backend supports it |
| vSAN encryption policy | StorageClass encryption parameter | Kubernetes encryption is per-StorageClass or at-rest via CSI driver |
| vSphere HA + DRS | Kubernetes scheduler + node affinity | Kubernetes schedules pods, not VMs; storage attach is separate from scheduling |
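To make the first table row concrete, a vSAN policy tolerating one failure with a high-performance tier might translate to a StorageClass like the sketch below. The provisioner name and parameter keys (`replication`, `qosClass`, `encryption`) are illustrative; every CSI driver defines its own parameter vocabulary, so consult your driver's documentation for the actual keys:

```yaml
# Hypothetical StorageClass mirroring a vSAN "FTT=1, high performance" policy.
# Parameter names are illustrative; each CSI driver defines its own keys.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-replicated
provisioner: csi.example.com        # placeholder for your CSI driver
parameters:
  replication: "2"                  # analogous to vSAN FTT=1 (two copies)
  qosClass: "high"                  # performance tier
  encryption: "true"                # at-rest encryption, as in a vSAN encryption policy
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```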
Migration Phases
A structured VMware storage migration to Kubernetes typically follows four phases:
Phase 1 — Policy mapping: Inventory existing vSAN storage policies (replication factor, caching tier, encryption, performance class) and map each to a Kubernetes StorageClass. Create StorageClasses in the target cluster and validate that the CSI driver enforces the equivalent behavior.
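The inventory-to-StorageClass translation in this phase can be scripted. The sketch below shows the shape of such a mapping; the policy field names and the output parameter keys are invented for illustration, since real vSAN policy exports and CSI parameter names vary by environment and driver:

```python
# Sketch: translate a vSAN-style policy description into CSI StorageClass
# parameters. Field names on both sides are hypothetical.

def map_vsan_policy(policy: dict) -> dict:
    """Map a vSAN policy dict to StorageClass parameter strings."""
    ftt = policy.get("failures_to_tolerate", 1)
    return {
        # FTT=n with RAID-1 mirroring means n+1 full copies of the data.
        "replication": str(ftt + 1),
        "qosClass": policy.get("performance_tier", "standard"),
        "encryption": str(policy.get("encrypted", False)).lower(),
    }

# Example: a mirrored, encrypted, high-performance policy.
gold = {"failures_to_tolerate": 1, "performance_tier": "high", "encrypted": True}
print(map_vsan_policy(gold))
```

Running the mapping over the full policy inventory produces one StorageClass definition per distinct policy, which can then be reviewed and applied to the target cluster.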
Phase 2 — Application containerization or replatforming: Decide which workloads will be containerized (repackaged as pods) and which will be lifted unmodified into KubeVirt VMs. This decision determines whether data must be migrated to a new volume format or can be mounted directly.

Phase 3 — Data migration: Move data from VMware datastores to Kubernetes PVCs. Common techniques include snapshot-based copy (snapshot the VMware volume, restore to a Kubernetes PVC), live rsync/copy jobs, and storage vendor replication utilities. Validate checksums and application consistency before cutting over.
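The checksum validation step can be as simple as hashing both directory trees and comparing the results. A minimal sketch, assuming both the source and migrated volumes are mounted as filesystems (for large datasets, vendor tools or block-level verification are usually faster):

```python
# Sketch: compare per-file checksums between a source tree and a migrated tree.
# SHA-256 and whole-file reads are illustrative choices.
import hashlib
from pathlib import Path

def tree_checksums(root: str) -> dict:
    """Return {relative_path: sha256_hex} for every file under root."""
    root_path = Path(root)
    sums = {}
    for f in sorted(root_path.rglob("*")):
        if f.is_file():
            sums[str(f.relative_to(root_path))] = hashlib.sha256(
                f.read_bytes()
            ).hexdigest()
    return sums

def trees_match(src: str, dst: str) -> bool:
    """True when both trees contain identical files with identical content."""
    return tree_checksums(src) == tree_checksums(dst)
```

Application consistency still requires quiescing or snapshotting the source before the copy; a checksum match only proves the copy was faithful, not that the source was in a consistent state.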
Phase 4 — Validation and cutover: Run performance benchmarks (fio, application-level throughput tests) against the new storage to confirm the Kubernetes StorageClass delivers equivalent or better characteristics. Cut over DNS, update connection strings, and decommission the VMware infrastructure.
The HCI Pitfall: Avoid Replicating the vSAN Model
Teams migrating from vSAN sometimes reach for a hyperconverged Kubernetes distribution that co-locates storage on compute nodes — effectively rebuilding the same HCI model they had with vSAN, but using Kubernetes instead of vSphere. This reproduces the core limitation of HCI: storage and compute must scale together, even when only one resource is the bottleneck.
Replacing vSAN with software-defined storage on a disaggregated architecture avoids this. Storage nodes scale independently from compute nodes. When a Kafka cluster needs more log storage, administrators add storage capacity without adding CPU or memory to the compute pool. This separation is especially valuable for stateful applications with asymmetric resource growth patterns.
KubeVirt as an Incremental Migration Path
Not every VMware workload can be containerized immediately. KubeVirt and Kubernetes virtualization provide a path to run VMs inside Kubernetes pods, allowing teams to migrate storage infrastructure while the application stack remains unchanged. VMs running in KubeVirt use the same CSI PVCs as containerized workloads, so the storage layer migrates to Kubernetes even if the application has not been rewritten.
This incremental approach is practical for large estates: migrate storage and the Kubernetes platform first, then containerize applications over multiple sprints. KubeVirt VMs and containerized workloads can share the same storage cluster, managed via the same CSI StorageClasses.
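A KubeVirt VirtualMachine consumes a CSI-provisioned PVC the same way a pod does. A minimal sketch, with placeholder names for the VM and the PVC:

```yaml
# Sketch: a KubeVirt VM whose root disk is an ordinary CSI-provisioned PVC.
# The names "legacy-app" and "legacy-app-root" are placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: legacy-app-root   # PVC bound via a CSI StorageClass
```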
VMware Storage Migration with Simplyblock
Simplyblock provides the software-defined block storage layer for teams migrating off vSAN or VMFS datastores. Key capabilities relevant to VMware migration:
- StorageClass mapping to QoS tiers: simplyblock StorageClasses can express IOPS limits, throughput caps, and replication factor — the same policy dimensions as vSAN, without vSphere dependency.
- NVMe/TCP and NVMe/RoCE transport: replace iSCSI or Fibre Channel datastores with NVMe-native protocols that deliver lower latency and higher throughput on standard Ethernet or RDMA fabrics.
- Disaggregated storage architecture: storage nodes scale independently, avoiding the HCI scaling trap that replicating vSAN’s architecture creates.
- CSI-native lifecycle operations: VolumeSnapshots, PVC expansion, and topology-aware provisioning are all available through standard Kubernetes APIs, no vSphere APIs required.
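These lifecycle operations use standard Kubernetes objects. For example, snapshotting an existing PVC goes through the CSI snapshot API; the object names below are placeholders, and the VolumeSnapshotClass must reference your installed driver:

```yaml
# Sketch: snapshot an existing PVC through the standard CSI snapshot API.
# "csi-snapclass" and "db-data" are placeholder names.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: db-data   # the PVC to snapshot
```

The resulting VolumeSnapshot can later be used as the `dataSource` of a new PVC, which is the CSI-native equivalent of restoring a vSphere volume snapshot.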
Related Terms
These glossary entries cover the architectural concepts and destination primitives for a VMware storage migration.
- What Is vSAN
- Replacing vSAN with Software-Defined Storage
- Hyperconverged vs. Disaggregated Storage
- KubeVirt and Kubernetes Virtualization
- Software-Defined Block Storage
Questions and Answers
How do I migrate VMware storage to Kubernetes?
The migration follows four phases: map vSAN storage policies to Kubernetes StorageClasses, decide which workloads to containerize or run in KubeVirt, migrate data from VMware datastores to Kubernetes PVCs using snapshots or copy jobs, then benchmark and cut over. The key is treating the migration as a storage architecture redesign, not just a data copy — Kubernetes CSI primitives work differently from vSphere APIs and require a deliberate mapping exercise.
What replaces vSAN policies in Kubernetes?
Kubernetes StorageClasses replace vSAN storage policies. A StorageClass encodes the parameters that a CSI driver uses to provision volumes — replication factor, performance tier, encryption, topology constraints. The specific parameters depend on the CSI driver. Simplyblock’s CSI driver supports per-StorageClass IOPS limits, throughput caps, replication, and thin provisioning, which covers the most common vSAN policy dimensions.
Can I run VMware workloads during a Kubernetes migration?
Yes. KubeVirt allows VMs to run inside Kubernetes as pods, using CSI PVCs for storage. This means you can migrate to Kubernetes-native storage while leaving the VM guest OS and application unchanged. KubeVirt and containerized workloads can share the same storage cluster. This incremental approach is practical for large estates where full containerization is a multi-quarter effort.
What is the storage architecture difference between vSAN and Kubernetes CSI?
vSAN is a hyperconverged storage system tightly coupled to vSphere. Storage capacity lives on the same nodes as compute, managed through vSphere APIs and storage policies. Kubernetes CSI is a vendor-neutral interface: any storage vendor can implement a CSI driver, and workloads request storage through StorageClasses and PVCs without knowing the underlying system. Disaggregated CSI-backed storage separates storage nodes from compute nodes, allowing independent scaling — a fundamental architectural difference from vSAN’s coupled model.