Teams looking for replicated block storage in Kubernetes without the operational weight of a full Ceph cluster frequently land on LINSTOR. LINSTOR is an open-source storage management system from LINBIT that provisions replicated block volumes backed by DRBD across Linux nodes. It integrates with Kubernetes through the Piraeus Operator and its CSI driver, giving clusters replicated, node-local NVMe or SSD storage without a dedicated storage cluster or object storage protocol.
DRBD (Distributed Replicated Block Device) has been part of the mainline Linux kernel since version 2.6.33. LINSTOR manages DRBD resources across a cluster: creating logical volumes, assigning them to nodes, and maintaining the replication topology. The Piraeus Operator packages this for Kubernetes, deploying LINSTOR controllers and satellites, registering the CSI driver, and exposing StorageClasses for dynamic PVC provisioning.
How LINSTOR Works in Kubernetes
LINSTOR runs two types of processes in a cluster. The LINSTOR controller manages the cluster-wide view: resource definitions, storage pool assignments, and node state. LINSTOR satellites run on each storage node and carry out the actual volume operations — creating LVM or ZFS thin volumes on local disks and setting up DRBD replication between nodes.
The Piraeus Operator deploys both components as Kubernetes workloads. The CSI driver translates Kubernetes PVC requests into LINSTOR API calls: a PVC creation triggers the controller to pick storage nodes, create a DRBD resource across those nodes, and return the volume to the CSI driver for binding.
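To make that flow concrete, a minimal PVC sketch is shown below. The StorageClass name linstor-replicated is an assumption for illustration; in practice it would be whatever class the cluster administrator created through the operator.

```yaml
# Minimal sketch: a PVC against a hypothetical LINSTOR-backed StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce                      # exclusive block access from one node
  storageClassName: linstor-replicated   # assumed LINSTOR-backed class (see below)
  resources:
    requests:
      storage: 20Gi
```

Creating this claim is what triggers the chain above: the CSI driver forwards the request to the LINSTOR controller, which places and creates the DRBD resource and returns a PersistentVolume that Kubernetes binds to the claim.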
Replication is synchronous by default: writes are only acknowledged to the application after all replicas have committed the data. This gives LINSTOR an RPO of zero for the replicated data — a write cannot be acknowledged and then lost. The trade-off is that replication adds one network round trip (typically <1 ms on a local network) to the write path.
LINSTOR vs Ceph vs simplyblock
| Factor | LINSTOR / DRBD | Ceph / Rook | simplyblock NVMe |
|---|---|---|---|
| Replication model | Synchronous DRBD (mirror) | 3-way replica or erasure coding | Configurable replicas + erasure coding |
| Latency | Low — DRBD kernel path | Higher — RADOS overhead | Very low — NVMe/TCP or NVMe/RoCE |
| Independent scaling | No — disks must be on cluster nodes | Partial — OSD nodes can be separate | Yes — fully disaggregated |
| Kubernetes-native | Yes via Piraeus Operator | Yes via Rook operator | Yes via CSI driver |
| Operational complexity | Medium — DRBD kernel module + LINSTOR | High — MON, MGR, OSD management | Low — Kubernetes-native CSI |
| NVMe/TCP support | No — DRBD replication over TCP/RDMA | No — Ceph messenger protocol | Yes — native NVMe/TCP and NVMe/RoCE |
Table 1: LINSTOR, Ceph, and simplyblock storage comparison for Kubernetes
Evaluating replicated block storage options for Kubernetes? Simplyblock provides fully disaggregated NVMe storage over NVMe/TCP or NVMe/RoCE — replication without node-local disk requirements and independent compute and storage scaling. Compare hyperconverged vs disaggregated storage →
The Piraeus Operator
The Piraeus Operator is the Kubernetes-native packaging for LINSTOR. It was originally developed by Aggregated Intelligence and is now maintained as an open-source project with contributions from LINBIT. The operator installs LINSTOR controller and satellite pods, configures satellite storage pools (LVM, ZFS, or LVM-thin), and registers the LINSTOR CSI driver with Kubernetes.
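As an illustration of how storage pools are declared, the sketch below follows the general shape of a Piraeus Operator v2 satellite configuration. The exact CRD fields vary between operator versions, so treat the field names here as assumptions to be checked against the CRD reference for the installed release.

```yaml
# Sketch only: satellite storage pool definition in the Piraeus Operator v2 style.
# Field names (storagePools, lvmThinPool, volumeGroup, thinPool) should be
# verified against the CRD documentation for the operator version in use.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: storage-pools
spec:
  storagePools:
    - name: lvm-thin
      lvmThinPool:
        volumeGroup: vg1        # existing LVM volume group on each satellite node
        thinPool: thinpool      # thin pool LINSTOR carves volumes from
```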
The CSI driver exposes StorageClasses that map to LINSTOR resource groups. A resource group defines the number of replicas, the storage pool type, and any placement constraints. When a PVC references a StorageClass backed by a resource group, LINSTOR selects nodes that satisfy the placement policy and creates the DRBD-replicated volume.
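A StorageClass tied to such a resource group might look like the following sketch. The provisioner name linstor.csi.linbit.com is the LINSTOR CSI driver's registered name; the parameter keys shown are illustrative, since their exact form has changed between driver releases and should be taken from the Piraeus documentation for the deployed version.

```yaml
# Illustrative StorageClass for two-way replicated LINSTOR volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer   # delay placement until a pod is scheduled
parameters:
  linstor.csi.linbit.com/storagePool: "lvm-thin"    # assumed pool name from the satellite config
  linstor.csi.linbit.com/placementCount: "2"        # number of DRBD replicas
reclaimPolicy: Delete
```

WaitForFirstConsumer is worth calling out: it defers volume placement until the consuming pod is scheduled, so LINSTOR can try to keep one replica local to the node running the workload and serve reads without crossing the network.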
The operator also handles day-2 operations: adding and removing storage nodes, updating satellite configurations, and responding to node failures by re-replicating data from surviving replicas onto replacement nodes.
LINSTOR Limitations for Kubernetes Environments
The node-local architecture creates scaling constraints. Storage capacity is directly tied to the number of nodes in the cluster. If a workload needs more storage but compute is adequately sized, the team must add nodes anyway — wasting compute resources to get disk capacity.
Node failures also affect availability in ways that depend on replication factor. With a 2-replica DRBD setup, a single node failure takes one replica offline. While the surviving replica stays accessible, the cluster is operating in a degraded state until the failed node recovers or a replacement is added and data is re-replicated.
LINSTOR does not support NVMe/TCP as a data transport. DRBD replicates over standard TCP or RDMA, which works well but does not give clients the sub-millisecond latency characteristics of a native NVMe/TCP fabric.
How Simplyblock Differs from LINSTOR
Simplyblock is architecturally distinct from LINSTOR in one important dimension: storage nodes are separate from compute nodes. The NVMe devices live in dedicated storage nodes; compute nodes running application pods access storage over NVMe/TCP or NVMe/RoCE across the cluster network. This is the disaggregated model — compute and storage scale independently.
For a Kubernetes team that wants replicated block storage, this means:
- Adding storage capacity does not require adding compute nodes
- Storage nodes can be right-sized for NVMe density without compute constraints
- Pod scheduling is not constrained by which nodes hold the backing disks
- Live migration and node drains work without worrying about local volume affinity
Simplyblock also exposes volumes via NVMe/TCP or NVMe/RoCE, which gives applications direct NVMe command semantics and lower latency than DRBD’s kernel mirror path over TCP for write-intensive workloads.
For a full comparison, teams evaluating LINSTOR alternatives can review the software-defined block storage and CSI driver pages.
Related Terms
These glossary pages cover the storage options and concepts relevant to teams evaluating LINSTOR and replicated Kubernetes block storage.
- Hyperconverged vs Disaggregated Storage
- LINBIT
- Software-Defined Block Storage
- CSI Driver
- Disaggregated Storage for Kubernetes
Questions and Answers
What is LINSTOR used for in Kubernetes?
LINSTOR is used to provision replicated block volumes for Kubernetes clusters where nodes have locally attached NVMe or SSD storage. It manages DRBD replication resources across storage nodes, maintains the replication topology, and exposes volumes to pods through the Piraeus Operator and CSI driver. Teams use it as an alternative to Ceph for clusters where they want synchronous replication without the operational complexity of a full Ceph deployment. Common use cases include databases, message queues, and any stateful workload that requires persistent, replicated block storage.
How does LINSTOR compare to Ceph for Kubernetes storage?
LINSTOR and Ceph both provide replicated block storage for Kubernetes but differ significantly in architecture and operational profile. LINSTOR uses synchronous DRBD mirroring between a small number of nodes — typically 2 or 3 replicas. Ceph uses a distributed RADOS object store with 3-way replication or erasure coding across OSD nodes. LINSTOR tends to have lower write latency because the DRBD path is simpler than Ceph’s RADOS layer, but Ceph supports more advanced features like erasure coding, object storage, and CephFS. Ceph also scales to larger storage pools more gracefully, while LINSTOR is better suited to smaller clusters where simplicity and low latency are priorities.
What is the Piraeus Operator?
The Piraeus Operator is a Kubernetes operator that packages LINSTOR for Kubernetes deployments. It deploys LINSTOR controller and satellite components as Kubernetes workloads, configures storage pools on each satellite node, registers the LINSTOR CSI driver, and exposes StorageClasses for dynamic PVC provisioning. The operator also handles day-2 operations like adding storage nodes, updating configurations, and managing DRBD resource state. It is available as open-source software and makes LINSTOR consumable as a standard Kubernetes storage backend without manual LINSTOR API management.
Does LINSTOR support NVMe/TCP?
LINSTOR itself does not use NVMe/TCP as a storage transport. DRBD replicates data between nodes over standard TCP sockets or RDMA connections, depending on configuration. LINSTOR can back its volumes with NVMe devices at the local storage layer — an NVMe SSD is a valid backing device for a DRBD resource — but the inter-node data path uses DRBD’s own protocol, not NVMe/TCP. If NVMe/TCP as a client-facing transport is a requirement, purpose-built platforms like simplyblock that expose volumes natively over NVMe/TCP or NVMe/RoCE are the relevant alternative.