When teams deploy OpenShift on bare metal, the first storage question is usually: how do we expose those locally-attached NVMe drives as Kubernetes volumes? The OpenShift Local Storage Operator (LSO) answers that question quickly. It scans node-local disks and creates PersistentVolumes from them, giving pods fast access to the physical hardware underneath — without requiring a separate storage system.
LSO is often the starting point for bare-metal and hyperconverged OpenShift storage. It exposes local NVMe SSDs as PersistentVolumes through a LocalVolume or LocalVolumeSet custom resource, and pods schedule to the node that owns the disk. That node-affinity design delivers raw device performance, but it also means that workloads are permanently tied to a specific host. Teams eventually feel that constraint when they need to drain a node, migrate a database, or grow storage independently of compute.
How the OpenShift Local Storage Operator Works
LSO installs as a standard OpenShift operator from OperatorHub. After installation, you create a LocalVolume or LocalVolumeSet resource that describes which block devices on which nodes should be provisioned. The operator discovers matching devices, either formats them with a filesystem or leaves them as raw block devices according to the requested volume mode, and creates PersistentVolume objects in Kubernetes.
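As a rough sketch, a minimal LocalVolume manifest looks like the following; the node names, device path, and storage class name are placeholders rather than values from a real cluster:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-nvme-disks
  namespace: openshift-local-storage
spec:
  # Only nodes matching this selector are scanned for the listed devices.
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0               # placeholder node names
              - worker-1
  storageClassDevices:
    - storageClassName: local-nvme     # StorageClass the operator creates for these PVs
      volumeMode: Block                # Block keeps the device raw; Filesystem formats it (set fsType)
      devicePaths:
        - /dev/nvme0n1                 # placeholder device path
```

Applying this manifest causes the operator to create one PV per matching device on each selected node.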
Each PV carries a node-affinity rule in its spec pointing to the host that owns the disk. When a pod claims that PV through a PVC, Kubernetes schedules the pod to the correct node automatically. This tight coupling is the source of both LSO’s performance advantage and its operational limitations.
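From the workload side, nothing special is required beyond an ordinary PVC against the LSO-created StorageClass. A minimal sketch, assuming the local-nvme class from the example above (claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                  # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce              # a local disk is only reachable from its own node
  volumeMode: Block              # must match the volumeMode configured in the LocalVolume
  storageClassName: local-nvme
  resources:
    requests:
      storage: 1Ti               # illustrative size; binding picks a PV at least this large
```

Because the StorageClass that LSO creates uses WaitForFirstConsumer binding, the claim stays Pending until a pod that consumes it is scheduled; the scheduler then places that pod on a node whose PV node-affinity rule matches.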
LSO supports several device types including NVMe SSDs, SATA SSDs, HDDs, and whole-disk block devices. It does not manage RAID or replication — those are left to the application or a higher-level storage layer.
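Where device paths differ from node to node, a LocalVolumeSet selects disks by their characteristics instead of explicit paths. A minimal sketch with illustrative filter values:

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: nvme-volumeset
  namespace: openshift-local-storage
spec:
  storageClassName: local-nvme-set
  volumeMode: Block
  maxDeviceCount: 4               # optional per-node cap on claimed devices
  deviceInclusionSpec:
    deviceTypes:
      - disk                      # whole-disk devices only, no partitions
    deviceMechanicalProperties:
      - NonRotational             # match SSDs/NVMe, skip spinning disks
    minSize: 100Gi                # ignore devices smaller than this
```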
Limitations of the Local Storage Operator
The node-binding model creates several hard constraints that matter in production:
No replication. If a node fails, the PersistentVolume on that node is unavailable until the node recovers, and if the disk itself fails, the data on it is lost unless the application keeps its own copies. There is no automatic failover and no replica on another node to resume service.
No live migration. Because the volume is physically tied to one node, a pod using an LSO volume cannot be moved to another node without downtime. This blocks rolling maintenance, node drains with stateful workloads, and free rescheduling of pods across nodes.
No independent storage scaling. To add more storage capacity, you must add nodes with disks. Storage and compute scale together, which can lead to over-provisioning on one dimension to satisfy the other.
No thin provisioning or snapshots. LSO provides raw block devices. Advanced storage features like instant snapshots, thin provisioning, or QoS enforcement require an additional layer on top.
Need replication and live migration alongside local NVMe performance? Simplyblock extends OpenShift storage with replicated, disaggregated NVMe/TCP or NVMe/RoCE volumes that keep latency low while unlocking workload mobility. Explore simplyblock for OpenShift →
LSO vs OpenShift Data Foundation vs simplyblock
| Feature | LSO | ODF / Ceph | simplyblock NVMe |
|---|---|---|---|
| Replication | None | 3-way replica or erasure coding | Configurable replicas + erasure coding |
| Live migration | Not supported | Supported | Supported |
| Independent scaling | No — compute and storage coupled | Partial — OSD nodes separate | Yes — fully disaggregated |
| Latency | Lowest — direct device access | Higher — RADOS overhead | Near-local — NVMe/TCP or NVMe/RoCE |
| Operational complexity | Low to start; high at failure time | High — MON, MGR, OSD management | Low — Kubernetes-native CSI |
Table 1: OpenShift storage option comparison — LSO, ODF, and simplyblock NVMe
When Teams Replace or Supplement the Local Storage Operator
LSO remains a valid choice for stateless or fault-tolerant workloads that already handle data redundancy at the application layer — Kafka with its own replication, for instance. For any workload that expects the storage layer to handle replication, LSO is a stop-gap.
Teams typically revisit LSO when they need to:
- Drain nodes for kernel or firmware upgrades without taking down stateful pods
- Grow storage capacity without adding equivalent compute
- Enforce per-workload IOPS limits across a multi-tenant cluster
- Meet RPO requirements that demand storage-level replication
At that point, the usual path is either OpenShift Data Foundation (Ceph-based, hyperconverged) or a disaggregated block storage platform like simplyblock.
How Simplyblock Complements or Replaces LSO
Simplyblock connects to OpenShift through the standard CSI driver and provisions disaggregated storage for Kubernetes over NVMe/TCP or NVMe/RoCE. Storage nodes hold the actual NVMe devices; compute nodes access them remotely over the fabric.
This disaggregated model means OpenShift pods are no longer permanently tied to the node that holds the disk. Volumes can be accessed from any node that can reach the storage fabric, which enables live migration, independent scaling, and standard Kubernetes workload mobility.
For teams running OpenShift on bare metal, simplyblock preserves the performance characteristic that makes LSO attractive — fast NVMe I/O — while adding the replication, snapshots, thin provisioning, and multi-tenant QoS that LSO does not provide.
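As a rough illustration of what that looks like in practice, the StorageClass below is a hypothetical sketch: the provisioner name and parameter keys are placeholders to be replaced with values from the simplyblock documentation. The point is the shape, with replication, thin provisioning, and per-volume QoS expressed as StorageClass parameters and consumed through an ordinary PVC from any node:

```yaml
# Hypothetical sketch only: the provisioner name and parameter keys below are
# placeholders, not the actual simplyblock CSI interface.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: simplyblock-replicated
provisioner: csi.simplyblock.io      # placeholder provisioner name
parameters:
  replicas: "2"                      # placeholder: storage-level replication factor
  qos_iops_limit: "20000"            # placeholder: per-volume IOPS cap
allowVolumeExpansion: true
volumeBindingMode: Immediate         # volumes are reachable from any node over the fabric
```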
Related Terms
These glossary pages cover the storage stack that OpenShift teams use alongside or instead of the Local Storage Operator.
- OpenShift Persistent Storage
- Red Hat OpenShift Container Platform
- Disaggregated Storage for Kubernetes
- CSI Node Plugin
- Software-Defined Block Storage
Questions and Answers
What does the OpenShift Local Storage Operator do?
The OpenShift Local Storage Operator provisions PersistentVolumes from locally-attached block devices — NVMe SSDs, SATA SSDs, or whole-disk devices — on OpenShift cluster nodes. An administrator defines LocalVolume or LocalVolumeSet resources describing which physical disks to use, and the operator turns them into Kubernetes PVs with node-affinity rules, so pods are automatically scheduled to the node that owns the disk. This makes raw device performance available to workloads without a separate storage system or network fabric.
What are the limitations of the OpenShift Local Storage Operator?
LSO has three hard limitations: no replication (a node failure takes the volume offline), no live migration (pods cannot move to another node without downtime), and no independent storage scaling (you must add nodes to add storage). It also lacks advanced storage features like thin provisioning, instant snapshots, and per-workload QoS. These limitations are manageable for stateless workloads, but they become operational risks for stateful services like databases or message queues.
How does the Local Storage Operator compare to OpenShift Data Foundation?
LSO is a lightweight operator that exposes raw local disks as PVs. It has low operational overhead when everything is healthy, but no recovery path when a node fails. OpenShift Data Foundation (ODF) runs a full Ceph cluster across nodes, providing replication, snapshots, and file/block/object storage from a single platform. ODF trades higher resource consumption and operational complexity for storage resilience. Teams that need replication without Ceph’s complexity often evaluate disaggregated NVMe platforms as a third option.
When should teams replace the Local Storage Operator?
The practical trigger is usually one of: a need to drain nodes for maintenance without taking down stateful workloads, a requirement for storage-level replication and automatic failover, a desire to scale storage capacity independently of compute, or a multi-tenant cluster where per-workload IOPS limits are required. If any of those requirements apply, LSO will create operational gaps that need to be solved at a higher layer — or by replacing LSO with a platform that handles them natively.