LVM, or Logical Volume Manager, is a storage abstraction layer built into the Linux kernel that allows administrators to pool multiple physical disks into a single Volume Group and carve it into resizable Logical Volumes. Rather than working with fixed disk partitions, LVM lets you resize, snapshot, stripe, and migrate storage online — making it a foundational tool for Linux server storage management.
For most single-node and on-premises environments, LVM has been the default answer for flexible disk management for over two decades. Its limits become more visible when teams move to distributed, containerized, or multi-node architectures where per-node volume management no longer scales.
How LVM Works
LVM operates in three layers of abstraction:
- Physical Volumes (PVs) — Raw disks or partitions initialized for LVM use (`pvcreate /dev/sda`). Each PV is divided into fixed-size Physical Extents (PEs), typically 4 MB each.
- Volume Groups (VGs) — One or more PVs pooled together into a single addressable storage pool (`vgcreate vg_main /dev/sda /dev/sdb`). The VG presents a unified pool of PEs to the layer above.
- Logical Volumes (LVs) — Flexible block devices carved from the VG pool (`lvcreate -L 300G -n lv_db vg_main`). Each LV maps to a set of PEs and appears to the OS as a standard block device — ready for a filesystem, swap, or raw use.
This layered model decouples the physical disk topology from what the OS and applications see, enabling online resize, live migration between PVs, and snapshot creation without downtime.
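As a minimal sketch of that workflow (device names, VG/LV names, sizes, and the mount point are illustrative, not prescriptive):

```bash
# Initialize two raw disks as Physical Volumes
pvcreate /dev/sda /dev/sdb

# Pool both PVs into a single Volume Group
vgcreate vg_main /dev/sda /dev/sdb

# Carve a 300 GiB Logical Volume from the pool
lvcreate -L 300G -n lv_db vg_main

# The LV is now a standard block device
mkfs.ext4 /dev/vg_main/lv_db
mount /dev/vg_main/lv_db /var/lib/mysql

# Inspect each layer
pvs   # physical volumes
vgs   # volume groups
lvs   # logical volumes
```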
🚀 Running stateful workloads that need more than LVM can offer? simplyblock delivers thin provisioning, instant snapshots, and multi-tenant QoS across nodes — natively integrated with Kubernetes via CSI, without per-node volume management overhead. 👉 Explore Kubernetes-Native NVMe Storage →
LVM Features and Capabilities
LVM ships with a feature set that covers most single-node storage needs:
- Online resize: Grow LVs live with `lvresize`, no unmount required; shrinking depends on the filesystem (ext4 must be offline to shrink, and XFS cannot shrink at all).
- Thin provisioning: LVM thin pools (`lvcreate --thinpool`) allocate space on write rather than at creation, allowing over-provisioned volume sets within a pool (see the sketch after this list).
- Snapshots: Copy-on-write snapshots of any LV (`lvcreate -s`), useful for backups and pre-upgrade recovery points.
- Striping: Stripe writes across multiple PVs (the `-i` flag) to increase throughput.
- Mirroring and RAID: LVM RAID (`--type raid1`, `raid5`, etc.) replicates data across PVs for local redundancy.
- Cache volumes: `lvmcache` lets a fast SSD act as a read/write cache in front of a slower HDD-backed LV.
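A sketch combining thin provisioning and snapshots (pool and volume names are illustrative):

```bash
# Create a 500 GiB thin pool inside the VG
lvcreate -L 500G --thinpool tp0 vg_main

# Allocate two 400 GiB thin LVs; together they exceed the pool size,
# which is fine because space is consumed only when data is written
lvcreate -V 400G --thin -n lv_app1 vg_main/tp0
lvcreate -V 400G --thin -n lv_app2 vg_main/tp0

# Copy-on-write snapshot as a pre-upgrade recovery point
lvcreate -s -n lv_app1_snap vg_main/lv_app1
```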
LVM vs. Traditional Partitioning vs. Modern Storage Platforms
LVM solves the rigidity of fixed disk partitions but was designed for single-node, administrator-managed environments. Modern platforms have extended or replaced this model in different directions.
| Feature | LVM | Traditional Partitions | Ceph / Rook | simplyblock |
|---|---|---|---|---|
| Online resize | Yes | No | Yes | Yes |
| Thin provisioning | Yes (lvmthin) | No | Yes | Yes |
| Snapshots | Yes (CoW) | No | Yes (RBD) | Yes (instant) |
| Multi-node / network | No | No | Yes | Yes (NVMe/TCP) |
| Kubernetes CSI | Via TopoLVM | No | Yes | Yes (native) |
| Operational complexity | Low (single node) | Very low | High | Low |
LVM in Kubernetes: TopoLVM and Its Limits
LVM is not natively Kubernetes-aware, but the TopoLVM project provides a CSI driver that surfaces LVM Logical Volumes as Kubernetes PersistentVolumes. Each PV is backed by an LV on the local node where the pod is scheduled.
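For illustration, a minimal TopoLVM StorageClass might look like the following. This is a sketch, assuming TopoLVM's documented `topolvm.io` provisioner name; exact parameters vary by TopoLVM version, so check the project docs before using it:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topolvm-local
provisioner: topolvm.io
volumeBindingMode: WaitForFirstConsumer  # bind only after the pod is
                                         # scheduled, since the LV is node-local
allowVolumeExpansion: true
EOF
```

`WaitForFirstConsumer` is the key setting: because the LV can only exist on one node, the PV must not be bound until the scheduler has picked where the pod runs.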
This approach works for workloads that can tolerate node-local storage — such as local caches or single-node databases. Its constraints become significant at scale:
- Node affinity lock-in: Pods are permanently bound to the node where their LV lives. If the node fails, the PV is inaccessible.
- No cross-node replication: LVM has no native mechanism to replicate data to other nodes. Mirroring is local-only.
- Manual VG management: Each node’s Volume Group must be configured and maintained separately. There is no centralized storage pool.
- Capacity limits: Storage is bounded by what is physically attached to a single node.
For teams who need local-disk performance with Kubernetes-native orchestration, TopoLVM is a reasonable choice. For teams who need shared storage, cross-node resilience, or dynamic provisioning across a cluster, a distributed storage platform is the more appropriate path.
LVM with simplyblock
simplyblock extends the core ideas behind LVM — thin provisioning, snapshots, pooled capacity — to a distributed, Kubernetes-native architecture. Where LVM manages a pool on a single node, simplyblock manages a shared NVMe pool across nodes, accessible over NVMe/TCP without requiring local disk attachment.
Specifically, simplyblock provides capabilities that LVM cannot:
- Multi-node thin pools: Storage capacity is shared across the cluster via NVMe over TCP, so any pod can access any volume regardless of which node it runs on.
- Instant snapshots across the cluster: Snapshots are space-efficient and available cluster-wide, not limited to the local node.
- CSI-native Kubernetes integration: Dynamic PVC provisioning with no per-node configuration, no TopoLVM setup, and no node affinity constraints.
- Multi-tenant QoS: Per-volume IOPS and throughput limits that LVM has no concept of.
Teams often start with LVM or TopoLVM and migrate to simplyblock when workloads grow beyond a single node, when cross-node resilience becomes a requirement, or when the per-node operational burden of managing Volume Groups outweighs the simplicity of a shared storage platform.
Related Terms
LVM is often compared alongside other Linux and Kubernetes storage building blocks. These related terms cover the layers above and below it.
- Thin Provisioning
- NVMe over TCP
- Ceph
- Kubernetes Persistent Volumes
Questions and Answers
What is LVM used for in Linux?
LVM is used to create flexible, resizable block storage volumes from one or more physical disks. It is commonly used to manage disk space on Linux servers, enabling administrators to grow or shrink partitions online, create copy-on-write snapshots for backups, and stripe data across multiple disks for better throughput — all without rebooting or taking services offline.
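For instance, striping for throughput and growing online (a sketch with hypothetical names; `-i 3` spreads writes across three PVs):

```bash
# 300 GiB LV striped across three PVs with a 64 KiB stripe size
lvcreate -i 3 -I 64 -L 300G -n lv_fast vg_main

# Grow the LV and its filesystem while it stays mounted
lvextend -r -L +100G vg_main/lv_fast
```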
How does LVM thin provisioning work?
LVM thin provisioning creates a thin pool (a specially configured Logical Volume) from which individual thin LVs are allocated. Unlike regular LVs, thin LVs do not consume physical space at creation — space is only taken from the pool when data is actually written. This allows you to over-provision volumes relative to the pool size, similar to how cloud providers offer storage that exceeds physical capacity. The pool must be monitored to prevent it from filling up.
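Because an over-provisioned pool can run out of real space, monitoring matters. A sketch, using the hypothetical pool `vg_main/tp0` from the earlier example:

```bash
# Data% and Meta% show how full the pool really is
lvs -o lv_name,lv_size,data_percent,metadata_percent vg_main

# Extend the pool before utilization approaches 100%
lvextend -L +100G vg_main/tp0
```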
Can LVM be used with Kubernetes?
Yes, via the TopoLVM CSI driver, which surfaces LVM Logical Volumes as Kubernetes PersistentVolumes backed by local node storage. This works well for node-local workloads but creates pod scheduling constraints because the PV is tied to the physical node where the LV lives. For workloads that need shared access, cross-node failover, or cluster-wide provisioning, a distributed storage platform like simplyblock is more appropriate.
What is the difference between LVM and a filesystem?
LVM operates one layer below the filesystem. It manages raw block devices — pooling physical disks and presenting logical block devices to the OS. A filesystem (ext4, xfs, btrfs) is then created on top of an LV to enable file-level access. LVM handles capacity, layout, and redundancy at the block level; the filesystem handles directory structure, permissions, and file metadata.
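The layering is visible in the commands themselves (names are illustrative): LVM creates the block device, the filesystem sits on top of it, and a grow touches both layers:

```bash
lvcreate -L 100G -n lv_data vg_main   # block layer: LVM carves the device
mkfs.xfs /dev/vg_main/lv_data         # file layer: filesystem on the LV
mount /dev/vg_main/lv_data /srv/data

# -r (--resizefs) grows the filesystem together with the LV
lvextend -r -L +20G vg_main/lv_data
```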
Does LVM support RAID?
Yes. LVM includes built-in RAID support via `--type raid1`, `raid5`, `raid6`, and `raid10` when creating Logical Volumes. Under the hood this reuses the Linux MD (software RAID) code through device-mapper. LVM RAID is node-local — it mirrors or stripes across PVs on the same machine and does not provide cross-node replication.
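A sketch, assuming a VG named `vg_main` with enough PVs attached:

```bash
# Two-way mirror (RAID1): one extra copy of every extent
lvcreate --type raid1 -m 1 -L 200G -n lv_mirror vg_main

# RAID5: two data stripes plus rotating parity (needs at least 3 PVs)
lvcreate --type raid5 -i 2 -L 400G -n lv_parity vg_main

# Watch initial synchronization progress
lvs -o lv_name,sync_percent vg_main
```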
What are the limitations of LVM for modern cloud-native environments?
LVM is a single-node tool. It has no concept of network storage, cross-node replication, or distributed capacity pools. In Kubernetes environments, this means volumes are bound to specific nodes, there is no automatic failover if a node fails, and managing separate Volume Groups on every node creates operational overhead that grows linearly with cluster size. Distributed storage platforms solve these constraints at the cost of some additional complexity.
When should I replace LVM with something like simplyblock?
Consider moving beyond LVM when your workloads require cross-node storage access, when pod rescheduling must not be constrained by local disk availability, when you need cluster-wide snapshot policies rather than per-node management, or when the storage footprint grows large enough that per-node Volume Group administration becomes a burden. simplyblock’s CSI driver and NVMe/TCP transport are a natural next step for Kubernetes-based environments that started with local LVM storage.