Kubernetes v1.31: Elli
https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/
Editors: Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith
Announcing the release of Kubernetes v1.31: Elli!
As with previous releases, Kubernetes v1.31 introduces new
stable, beta, and alpha features.
The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.
This release consists of 45 enhancements.
Of those enhancements, 11 have graduated to Stable, 22 are entering Beta,
and 12 are entering Alpha.
Release theme and logo
The Kubernetes v1.31 Release Theme is "Elli".
Kubernetes v1.31's Elli is a cute and joyful dog, with a heart of gold and a nice sailor's cap, as a playful wink to the huge and diverse family of Kubernetes contributors.
Kubernetes v1.31 marks the first release after the project has successfully celebrated its first 10 years.
Kubernetes has come a very long way since its inception, and it's still moving towards exciting new directions with each release.
After 10 years, it is awe-inspiring to reflect on the effort, dedication, skill, wit and tireless work of the countless Kubernetes contributors who have made this a reality.
And yet, despite the herculean effort needed to run the project, there is no shortage of people who show up, time and again, with enthusiasm, smiles and a sense of pride for contributing and being part of the community.
This "spirit" that we see from new and old contributors alike is the sign of a vibrant community, a "joyful" community, if we might call it that.
Kubernetes v1.31's Elli is all about celebrating this wonderful spirit! Here's to the next decade of Kubernetes!
Highlights of features graduating to Stable
This is a selection of some of the improvements that are now stable following the v1.31 release.
AppArmor support is now stable
Kubernetes support for AppArmor is now GA. Protect your containers using AppArmor by setting the appArmorProfile.type field in the container's securityContext.
Note that before Kubernetes v1.30, AppArmor was controlled via annotations; starting in v1.30 it is controlled using fields.
You should migrate away from using annotations and start using the appArmorProfile.type field.
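For example, a minimal Pod manifest using the field-based API might look like this (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo            # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.36          # illustrative image
    command: ["sleep", "infinity"]
    securityContext:
      appArmorProfile:
        type: RuntimeDefault     # use the container runtime's default AppArmor profile
```

The type field also accepts Localhost (together with localhostProfile) to use a profile loaded on the node, or Unconfined to disable AppArmor for the container.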
To learn more read the AppArmor tutorial.
This work was done as a part of KEP #24, by SIG Node.
Improved ingress connectivity reliability for kube-proxy
Improved ingress connectivity reliability for kube-proxy is stable in v1.31.
A common problem with load balancers in Kubernetes is keeping the different components involved in sync so that traffic is not dropped.
This feature implements a mechanism in kube-proxy that lets load balancers do connection draining for terminating Nodes exposed by Services of type: LoadBalancer with externalTrafficPolicy: Cluster, and establishes some best practices for cloud providers and Kubernetes load balancer implementations.
To use this feature, kube-proxy needs to run as default service proxy on the cluster and the load balancer needs to support connection draining.
No specific changes are required to use this feature: it has been enabled by default in kube-proxy since v1.30 and was promoted to stable in v1.31.
For more details about this feature please visit the Virtual IPs and Service Proxies documentation page.
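As an illustration, the feature applies to Services shaped like the following (names and ports are illustrative); nothing extra needs to be set on the Service itself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                        # illustrative name
spec:
  type: LoadBalancer               # exposed through a cloud load balancer
  externalTrafficPolicy: Cluster   # traffic may be routed to any node
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```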
This work was done as part of KEP #3836 by SIG Network.
Persistent Volume last phase transition time
The Persistent Volume last phase transition time feature moved to GA in v1.31.
This feature adds a new field to PersistentVolumeStatus: every PersistentVolume object now has a .status.lastTransitionTime field that holds a timestamp of
when the volume last transitioned to a different phase.
This change is not immediate; the new field will be populated whenever a PersistentVolume is updated and first transitions between phases (Pending, Bound, or Released) after upgrading to Kubernetes v1.31.
This allows you to measure, for example, the time between when a PersistentVolume moves from Pending to Bound, which can be useful for metrics and SLOs.
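After upgrading, the status of a PersistentVolume might include the new field along these lines (the timestamp is illustrative):

```yaml
status:
  phase: Bound
  lastTransitionTime: "2024-08-13T10:20:30Z"  # when the volume last changed phase
```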
For more details about this feature please visit the PersistentVolume documentation page.
This work was done as a part of KEP #3762 by SIG Storage.
Highlights of features graduating to Beta
This is a selection of some of the improvements that are now beta following the v1.31 release.
nftables backend for kube-proxy
The nftables backend moves to beta in v1.31, behind the NFTablesProxyMode feature gate which is now enabled by default.
The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables.
The nftables proxy mode is able to process changes to service endpoints faster and more efficiently than the iptables mode, and is also able to more efficiently process packets in the kernel (though this only
becomes noticeable in clusters with tens of thousands of services).
As of Kubernetes v1.31, the nftables mode is still relatively new, and may not be compatible with all network plugins; consult the documentation for your network plugin.
This proxy mode is only available on Linux nodes, and requires kernel 5.13 or later.
Before migrating, note that some features, especially around NodePort services, are not implemented exactly the same in nftables mode as they are in iptables mode.
Check the migration guide to see if you need to override the default configuration.
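To try it, you can select the mode in the kube-proxy configuration file (a sketch; the rest of the configuration is omitted):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"   # default is "iptables" on Linux
```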
This work was done as part of KEP #3866 by SIG Network.
Changes to reclaim policy for PersistentVolumes
The Always Honor PersistentVolume Reclaim Policy feature has advanced to beta in Kubernetes v1.31.
This enhancement ensures that the PersistentVolume (PV) reclaim policy is respected even after the associated PersistentVolumeClaim (PVC) is deleted, thereby preventing the leakage of volumes.
Prior to this feature, the reclaim policy linked to a PV could be disregarded under specific conditions, depending on whether the PV or PVC was deleted first.
Consequently, the corresponding storage resource in the external infrastructure might not be removed, even if the reclaim policy was set to "Delete".
This led to potential inconsistencies and resource leaks.
With the introduction of this feature, Kubernetes now guarantees that the "Delete" reclaim policy will be enforced, ensuring the deletion of the underlying storage object from the backend infrastructure, regardless of the deletion sequence of the PV and PVC.
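The reclaim policy in question is the persistentVolumeReclaimPolicy field on the PV, for example (the driver and volume handle here are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                       # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete  # now honored regardless of PV/PVC deletion order
  csi:
    driver: example.csi.vendor.io        # hypothetical CSI driver
    volumeHandle: vol-0123               # hypothetical backend volume ID
```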
This work was done as a part of KEP #2644 by SIG Storage.
Bound service account token improvements
The ServiceAccountTokenNodeBinding feature is promoted to beta in v1.31.
This feature allows requesting a token bound to a node rather than to a pod. The token includes node information in its claims, and the node's existence is validated whenever the token is used.
For more information, read the bound service account tokens documentation.
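A sketch of requesting a node-bound token through the TokenRequest API (the node name is illustrative); the request is submitted against a ServiceAccount's token subresource:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences: ["https://kubernetes.default.svc"]
  expirationSeconds: 3600
  boundObjectRef:
    kind: Node           # bind the token to a Node instead of a Pod
    apiVersion: v1
    name: worker-node-1  # illustrative node name
```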
This work was done as part of KEP #4193 by SIG Auth.
Multiple Service CIDRs
Support for clusters with multiple Service CIDRs moves to beta in v1.31 (disabled by default).
There are multiple components in a Kubernetes cluster that consume IP addresses: Nodes, Pods and Services.
Node and Pod IP ranges can be changed dynamically, because they depend on the infrastructure and the network plugin respectively.
However, Service IP ranges are defined at cluster creation as a hardcoded flag on the kube-apiserver.
IP exhaustion has been a problem for long-lived or large clusters, as admins needed to expand, shrink, or even entirely replace the assigned Service CIDR range.
These operations were never supported natively and were performed via complex and delicate maintenance procedures, often causing cluster downtime. This new feature allows users and cluster admins to dynamically modify Service CIDR ranges with zero downtime.
For more details about this feature please visit the
Virtual IPs and Service Proxies documentation page.
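With the feature enabled, an additional Service CIDR can be added as a cluster-scoped object, roughly like this (the name and range are illustrative):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: newservicecidr   # illustrative name
spec:
  cidrs:
  - 10.96.100.0/24       # illustrative additional range for Service ClusterIPs
```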
This work was done as part of KEP #1880 by SIG Network.
Traffic distribution for Services
Traffic distribution for Services moves to beta in v1.31 and is enabled by default.
After several iterations on finding the best user experience and traffic engineering capabilities for Services networking, SIG Networking implemented the trafficDistribution field in the Service specification, which serves as a guideline for the underlying implementation to consider while making routing decisions.
For more details about this feature please read the
1.30 Release Blog
or visit the Service documentation page.
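For example, a Service that prefers topologically closer endpoints might be declared like this (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                  # illustrative name
spec:
  selector:
    app: my-app
  ports:
  - port: 80
  trafficDistribution: PreferClose  # prefer endpoints in the client's zone when available
```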
This work was done as part of KEP #4444 by SIG Network.
Kubernetes VolumeAttributesClass ModifyVolume
VolumeAttributesClass API is moving to beta in v1.31.
The VolumeAttributesClass provides a generic,
Kubernetes-native API for dynamically modifying volume parameters, such as provisioned IO.
This allows workloads to vertically scale their volumes online to balance cost and performance, where supported by their provider.
This feature had been alpha since Kubernetes 1.29.
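A sketch of a VolumeAttributesClass (the driver and parameter names depend on the CSI driver and are hypothetical here):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: VolumeAttributesClass
metadata:
  name: fast-io
driverName: example.csi.vendor.io  # hypothetical CSI driver
parameters:
  provisioned-iops: "4000"         # hypothetical driver-specific parameter
```

A PersistentVolumeClaim can then reference it through spec.volumeAttributesClassName to modify the volume's attributes online.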
This work was done as a part of KEP #3751, led by SIG Storage.
New features in Alpha
This is a selection of some of the improvements that are now alpha following the v1.31 release.
New DRA APIs for better accelerators and other hardware management
Kubernetes v1.31 brings an updated dynamic resource allocation (DRA) API and design.
The main focus of the update is on structured parameters, which make resource information and requests transparent to Kubernetes and its clients, enabling features like cluster autoscaling.
DRA support in the kubelet was updated such that version skew between kubelet and the control plane is possible. With structured parameters, the scheduler allocates ResourceClaims while scheduling a pod.
Allocati