Kubernetes 1.31: Day of the Storage

Aug 22nd, 2024 | 6 min read

Elli is the mascot of the Kubernetes 1.31 release: a cute and joyful dog with a heart of gold and a nice sailor's cap, as a playful wink to the huge and diverse family of Kubernetes contributors. On August 13, the Kubernetes team released Kubernetes 1.31, also known as “Elli”. The release follows about four months after the release of 1.30 (Uwubernetes), which also marked Kubernetes' 10-year anniversary. You can read about all changes of this release in the Kubernetes 1.31 changelog.

Kubernetes 1.31 brings a lot of changes that aren't directly related to storage but are still important to note.

AppArmor Support goes GA

AppArmor is a policy-based security system for Linux applications. The idea behind AppArmor is similar to SELinux. Which one is used on your Linux system heavily depends on the distribution: while Red Hat-based distributions commonly use SELinux, Debian- and SUSE-based distributions are on the AppArmor side. Whatever you use, Kubernetes has you covered with support for both, and the AppArmor support just went GA (general availability).

Editor’s note: We had Hannes Ullman from bifrost security on our podcast, talking about how to use automatically generated application behavior profiles and AppArmor to secure your Kubernetes-hosted applications.
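With the GA, AppArmor profiles are assigned through the securityContext field rather than the old annotations. A minimal sketch of what that could look like; the pod name and the "k8s-nginx" localhost profile are hypothetical, and a Localhost profile must already be loaded on the node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo   # hypothetical name
spec:
  securityContext:
    # Pod-level default: use the container runtime's default AppArmor profile.
    appArmorProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx:1.27
      securityContext:
        # A container-level profile overrides the pod-level setting.
        # "k8s-nginx" is a hypothetical profile loaded on the node.
        appArmorProfile:
          type: Localhost
          localhostProfile: k8s-nginx
```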

Nftables for Kube-Proxy

For a very long time, iptables was the commonly used backend for firewalling in Linux. Since Linux kernel 3.13, its successor within the netfilter project, nftables, has been available. Kubernetes 1.31 adds the necessary kube-proxy backend to interact with nftables, enabling us to use the more modern alternative on our worker nodes.
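Switching the backend happens in kube-proxy's configuration file. A minimal sketch, assuming Kubernetes 1.31, where the nftables mode is beta behind the NFTablesProxyMode feature gate (enabled by default):

```yaml
# kube-proxy configuration, passed via the --config flag.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Switch the backend from the default (iptables) to nftables.
mode: "nftables"
```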

Cgroups V1 in Maintenance Mode

It’s about time. Linux cgroups (short for control groups) version 1, the basis (together with namespaces) for all containerization on Linux, was added in 2007. In 2016, its successor, the much more powerful version 2, was added to the Linux kernel. The use of v1 has been on a fast and steady decline, hence the deprecation of the cgroup v1 backend in Kubernetes. While it is not yet removed, moving to the v2-based backend is highly recommended.

While there are amazing non-storage changes, I think the storage-related changes are the ones that really stand out. Therefore, let’s take a deeper look at those.

Personally, my favorite feature of the release is the removal of over 1.3 million lines of code. Much of it consists of old storage adapters (in-tree volume plugins) that predate the availability of the Container Storage Interface (CSI).

Persistent Volume last Phase Transition Time goes GA

Before we come to the volume plugins though, there is one other amazing feature: the last phase transition timestamp on persistent volumes (PV). It records the time when a PV last changed its phase (such as Pending, Bound, or Released) and therefore provides a generic way to record and find this information. One use of it is alerting: you could look at the current phase and the PV’s last transition time and alert if the PV stays in a specific phase for too long, for example a PV that remains Released because its storage hasn’t been reclaimed yet.
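The timestamp shows up in the PV's status. A sketch of what such a status could look like (the phase and timestamp values are, of course, hypothetical):

```yaml
# Excerpt of a PersistentVolume's status section.
status:
  phase: Released
  # When the PV last transitioned into its current phase;
  # a monitoring rule could alert if this lies too far in the past.
  lastPhaseTransitionTime: "2024-08-13T10:00:00Z"
```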

While available for some time, it finally made the jump to GA and is now considered stable.

Image Volume Support

With the rise of machine learning, artificial intelligence, and LLMs (large language models), providing a fixed data set to containers becomes ever more important. Oftentimes, those data sets or pre-trained models must be made available to containers, ideally without baking them into the main container image, be it for simpler updates, to reduce the chances of vulnerabilities, or to simplify image creation.

For this use case, Kubernetes 1.31 adds a new VolumeSource type which directly supports OCI (Open Container Initiative) images and artifacts. While Kubernetes has strong support for OCI images, artifacts cover additional object types that can influence lifecycle management, reference other artifacts, or add additional validations, security checks, and configurations.
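A minimal sketch of the new volume source, assuming a cluster with the (alpha) ImageVolume feature gate enabled; the registry reference is a hypothetical OCI artifact containing a pre-trained model:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo   # hypothetical name
spec:
  containers:
    - name: app
      image: python:3.12
      volumeMounts:
        - name: model
          mountPath: /models
          readOnly: true    # image volumes are mounted read-only
  volumes:
    - name: model
      image:
        # Hypothetical OCI artifact with a pre-trained model.
        reference: registry.example.com/models/llm:v1
        pullPolicy: IfNotPresent
```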

Removal of Vendor-Specific Storage Implementations

As mentioned, much of the removed codebase consists of old storage provider implementations. At the time of writing, most of those providers have been reimplemented as CSI drivers, with assisted migration offered during the last couple of releases.

The few remaining ones will follow soon. That means, going forward, it’ll be much easier and much more streamlined to use any kind of storage backend, even in a mixed fashion, with one pod using one backend and others using another, as sketched below. Storage classes for the win.
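A sketch of what that mixing could look like, assuming two hypothetical CSI drivers: each backend is exposed through its own storage class, and a PVC simply names the class it wants:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-nvme
provisioner: nvme.csi.example.com      # hypothetical CSI driver
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cheap-capacity
provisioner: capacity.csi.example.com  # hypothetical CSI driver
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-fast
spec:
  # Pick the backend per claim; other pods can use other classes.
  storageClassName: fast-nvme
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```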

Removal of the CephFS Volume Plugin

The CephFS volume plugin had been part of the Kubernetes codebase for many years. Since the release of Kubernetes 1.28 it has been deprecated. At the same time, the CSI driver variant was made available and is the recommended way to use CephFS today.

If you want to understand how to use CephFS, find the CSI driver plugin (ceph-csi) on GitHub.
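A minimal StorageClass sketch for the ceph-csi CephFS driver; the cluster ID, filesystem name, and secret references are placeholders that depend on your Ceph deployment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  # Placeholders: values come from your Ceph cluster setup.
  clusterID: <cluster-id>
  fsName: <filesystem-name>
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph
reclaimPolicy: Delete
```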

Removal of the Ceph RBD Volume Plugin

Just like its sibling, the CephFS volume plugin, the in-tree volume plugin for Ceph RBD has been removed. It was likewise marked deprecated in Kubernetes 1.28.

Today, the same CSI driver serves both CephFS and Ceph RBD, meaning the same GitHub repository (ceph-csi) provides the necessary implementation.

Previous Deprecations and Removals

In earlier releases, additional in-tree volume plugins were removed from Kubernetes:

- awsElasticBlockStore was deprecated in 1.19 and removed in 1.27
- azureDisk was deprecated in 1.19 and removed in 1.27
- azureFile was deprecated in 1.21; migration to the CSI driver is available
- cinder was deprecated in 1.11 and removed in 1.26
- gcePersistentDisk was deprecated in 1.17 and removed in 1.28
- gitRepo was deprecated in 1.11 and shouldn’t be used due to a vulnerability (CVE-2018-11235); Simplyblock is working on helping to remove this volume plugin
- glusterfs was deprecated in 1.25 and removed in 1.26
- portworxVolume was deprecated in 1.25; migration to the CSI driver is available
- vsphereVolume was deprecated in 1.26; migration to the CSI driver is available

The Day of the Storage

Kubernetes release 1.31, like always, brings a great set of new features and moves existing features along their path to general availability. However, the most exciting features of the last couple of releases are storage-related.

The introduction of the Container Storage Interface (CSI) to Kubernetes, driven by the SIG Storage group, marked the move to seamless integration of external storage engines without the need for a Kubernetes release to integrate them, mostly eliminating integration complexity with Kubernetes.

Ever since, CSI adoption has increased, as has the feature set, offering more features today than any of the previous in-tree volume plugins provided. And yet, there is more to come.

Due to the large number of CSI drivers and optional features in the CSI specification, our searchable list of CSI drivers provides a quick way to find the storage implementation you need.

If you’re looking for hyper-converged or disaggregated cloud-native storage though, look no further than simplyblock!

Our own CSI driver provides intelligent storage orchestration, enabling you to combine the best of cloud storage (such as Amazon EBS), object storage (such as Amazon S3), local instance storage, and clustered NVMe storage for your mix of high performance, low latency, high availability, and capacity. Paired with thin provisioning, copy-on-write clones, encryption, and automatic tiering between all of the above storage backends, simplyblock enables cloud-native storage that decreases cost while increasing the feature set.

You may also like:

How the CSI (Container Storage Interface) Works

Kubernetes Persistent Volumes: Best Practices & Guide

Kubernetes: Future or Fad?