Kubernetes v1.30: Uwubernetes
https://kubernetes.io/blog/2024/04/17/kubernetes-v1-30-release/
Editors: Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
Announcing the release of Kubernetes v1.30: Uwubernetes, the cutest release!
Similar to previous releases, the release of Kubernetes v1.30 introduces new stable, beta, and alpha
features. The consistent delivery of top-notch releases underscores the strength of our development
cycle and the vibrant support from our community.
This release consists of 45 enhancements. Of those enhancements, 17 have graduated to Stable, 18 are
entering Beta, and 10 are entering Alpha.
Release theme and logo
Kubernetes v1.30 makes your clusters cuter!
Kubernetes is built and released by thousands of people from all over the world and all walks of
life. Most contributors are not being paid to do this; we build it for fun, to solve a problem, to
learn something, or for the simple love of the community. Many of us found our homes, our friends,
and our careers here. The Release Team is honored to be a part of the continued growth of
Kubernetes.
For the people who built it, for the people who release it, and for the furries who keep all of our
clusters online, we present to you Kubernetes v1.30: Uwubernetes, the cutest release to date. The
name is a portmanteau of “kubernetes” and “UwU,” an emoticon used to indicate happiness or cuteness.
We’ve found joy here, but we’ve also brought joy from our outside lives that helps to make this
community as weird and wonderful and welcoming as it is. We’re so happy to share our work with you.
UwU ♥️
Improvements that graduated to stable in Kubernetes v1.30
This is a selection of some of the improvements that are now stable following the v1.30 release.
Robust VolumeManager reconstruction after kubelet restart (SIG Storage)
This is a volume manager refactoring that allows the kubelet to populate additional information
about how existing volumes are mounted during the kubelet startup. In general, this makes volume
cleanup after kubelet restart or machine reboot more robust.
This does not bring any changes for users or cluster administrators. We used the feature process and
feature gate NewVolumeManagerReconstruction to be able to fall back to the previous behavior in
case something goes wrong. Now that the feature is stable, the feature gate is locked and cannot be
disabled.
Prevent unauthorized volume mode conversion during volume restore (SIG Storage)
For Kubernetes 1.30, the control plane always prevents unauthorized changes to volume modes when
restoring a snapshot into a PersistentVolume. As a cluster administrator, you'll need to grant
permissions to the appropriate identity principals (for example: ServiceAccounts representing a
storage integration) if you need to allow that kind of change at restore time.
Warning: Action required before upgrading. The prevent-volume-mode-conversion feature flag is enabled by
default in external-provisioner v4.0.0 and external-snapshotter v7.0.0. A volume mode change
will be rejected when creating a PVC from a VolumeSnapshot unless you perform the steps described in
the "Urgent Upgrade Notes" sections for the external-provisioner v4.0.0 and the
external-snapshotter v7.0.0 releases.
For more information on this feature, also read converting the volume mode of a
Snapshot.
Pod Scheduling Readiness (SIG Scheduling)
Pod scheduling readiness graduates to stable this release, after being promoted to beta in
Kubernetes v1.27.
This now-stable feature lets Kubernetes avoid trying to schedule a Pod that has been defined, when
the cluster doesn't yet have the resources provisioned to allow actually binding that Pod to a node.
That's not the only use case; the custom control on whether a Pod can be allowed to schedule also
lets you implement quota mechanisms, security controls, and more.
Crucially, marking these Pods as exempt from scheduling cuts the work that the scheduler would
otherwise do, churning through Pods that can't or won't schedule onto the nodes your cluster
currently has. If you have cluster
autoscaling active, using scheduling
gates doesn't just cut the load on the scheduler, it can also save money. Without scheduling gates,
the autoscaler might provision a node that turns out not to be needed.
In Kubernetes v1.30, by specifying (or removing) a Pod's .spec.schedulingGates, you can control
when a Pod is ready to be considered for scheduling. This is a stable feature and is now formally
part of the Kubernetes API definition for Pod.
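As a minimal sketch (the Pod and gate names here are illustrative), a gated Pod looks like this; the scheduler will not consider the Pod until its gates list is emptied:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                 # illustrative name
spec:
  schedulingGates:
  - name: example.com/quota-check   # hypothetical gate; a controller removes it to release the Pod
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

While any gate is present, the Pod remains in a SchedulingGated state; once a controller removes the last entry from .spec.schedulingGates, the Pod becomes eligible for scheduling.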
Min domains in PodTopologySpread (SIG Scheduling)
The minDomains parameter for PodTopologySpread constraints graduates to stable this release, which
allows you to define the minimum number of domains. This feature is designed to be used with Cluster
Autoscaler.
If you deploy Pods with minDomains set and there aren't enough domains already present, those Pods
are marked as unschedulable. The Cluster Autoscaler then provisions node(s) in new domain(s), and
you eventually get Pods spreading over enough domains.
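As a sketch (names illustrative), a constraint that requires Pods to spread across at least three zones looks like this; note that minDomains is only honored when whenUnsatisfiable is DoNotSchedule:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example              # illustrative name
  labels:
    app: spread-example
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3                   # require at least 3 zones before Pods become schedulable
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: spread-example
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```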
Go workspaces for k/k (SIG Architecture)
The Kubernetes repo now uses Go workspaces. This should not impact end users at all, but does have an
impact on developers of downstream projects. Switching to workspaces caused some breaking changes
in the flags to the various k8s.io/code-generator
tools. Downstream consumers should look at
staging/src/k8s.io/code-generator/kube_codegen.sh
to see the changes.
For full details on the changes and reasons why Go workspaces was introduced, read Using Go
workspaces in Kubernetes.
Improvements that graduated to beta in Kubernetes v1.30
This is a selection of some of the improvements that are now beta following the v1.30 release.
Node log query (SIG Windows)
To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows fetching
logs of services running on the node. To use the feature, ensure that the NodeLogQuery feature
gate is enabled for that node, and that the kubelet configuration options enableSystemLogHandler
and enableSystemLogQuery are both set to true.
Following the v1.30 release, this is now beta (you still need to enable the feature to use it,
though).
On Linux the assumption is that service logs are available via journald. On Windows the assumption
is that service logs are available in the application log provider. Logs are also available by
reading files within /var/log/ (Linux) or C:\var\log\ (Windows). For more information, see the
log query documentation.
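As a sketch, the kubelet configuration fragment that enables the feature on a node looks like this:

```yaml
# Fragment of the kubelet configuration file (KubeletConfiguration)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true
enableSystemLogHandler: true
enableSystemLogQuery: true
```

With this in place, logs can be fetched through the API server proxy, for example:
`kubectl get --raw "/api/v1/nodes/<node-name>/proxy/logs/?query=kubelet"`.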
CRD validation ratcheting (SIG API Machinery)
You need to enable the CRDValidationRatcheting feature
gate to use this behavior, which then
applies to all CustomResourceDefinitions in your cluster.
Provided you enabled the feature gate, Kubernetes implements validation ratcheting for
CustomResourceDefinitions. The API server is willing to accept updates to resources that are not valid
after the update, provided that each part of the resource that failed to validate was not changed by
the update operation. In other words, any invalid part of the resource that remains invalid must
have already been wrong. You cannot use this mechanism to update a valid resource so that it becomes
invalid.
This feature allows authors of CRDs to confidently add new validations to the OpenAPIV3 schema under
certain conditions. Users can update to the new schema safely without bumping the version of the
object or breaking workflows.
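To illustrate (this schema fragment is hypothetical), suppose a CRD author tightens an existing field by adding a new pattern constraint. With ratcheting, existing objects whose value violates the new pattern can still be updated, as long as the update leaves that field untouched:

```yaml
# Hypothetical tightened schema fragment for an existing CRD field.
# Objects created before this constraint was added remain updatable,
# provided the update does not modify the (still-invalid) "region" value.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        region:
          type: string
          pattern: "^[a-z]+-[a-z]+-[0-9]+$"   # newly added validation
```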
Contextual logging (SIG Instrumentation)
Contextual Logging advances to beta in this release, empowering developers and operators to inject
customizable, correlatable contextual details like service names and transaction IDs into logs
through WithValues and WithName. This enhancement simplifies the correlation and analysis of log
data across distributed systems, significantly improving the efficiency of troubleshooting efforts.
By offering a clearer insight into the workings of your Kubernetes environments, Contextual Logging
ensures that operational challenges are more manageable, marking a notable step forward in
Kubernetes observability.
Make Kubernetes aware of the LoadBalancer behaviour (SIG Network)
The LoadBalancerIPMode feature gate is now beta and is now enabled by default. This feature allows
you to set the .status.loadBalancer.ingress.ipMode for a Service with type set to
LoadBalancer. The .status.loadBalancer.ingress.ipMode specifies how the load-balancer IP
behaves. It may be specified only when the .status.loadBalancer.ingress.ip field is also
specified. See more details about specifying IPMode of load balancer
status.
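As a sketch, the field is set by the cloud provider's controller (not by end users) in the Service status; the address below is an example from the TEST-NET-1 documentation range:

```yaml
# Fragment of a Service of type LoadBalancer, as written by the
# cloud provider's controller after provisioning the load balancer.
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
      ipMode: Proxy   # LB terminates traffic and forwards to node/pod IPs;
                      # the default, VIP, delivers traffic with the LB IP as destination
```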
New alpha features
Speed up recursive SELinux label change (SIG Storage)
From the v1.27 release, Kubernetes has included an optimization that sets SELinux labels on the
contents of volumes in constant time. Kubernetes achieves that speed-up using a mount
option. The slower legacy behavior requires the container runtime to recursively walk through the
whole volume and apply SELinux labelling individually to each file and directory; this is
especially noticeable for volumes with a large number of files and directories.
Kubernetes 1.27 graduated this feature as beta, but limited it to ReadWriteOncePod volumes. The
corresponding feature gate is SELinuxMountReadWriteOncePod. It's still enabled by default and
remains beta in 1.30.
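As a minimal sketch (names illustrative), the constant-time relabeling applies when a Pod that uses a ReadWriteOncePod volume specifies an SELinux label in its security context:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-example             # illustrative name
spec:
  securityContext:
    seLinuxOptions:                 # label applied via the mount option instead of a recursive walk
      level: "s0:c123,c456"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-rwop-claim # hypothetical PVC with accessModes: [ReadWriteOncePod]
```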
Kubernetes 1.30 extends support for the SELinux mount option to all volumes as alpha, with a
separate feature gate: SELinuxMount. This feature gate introduces a behavioral change when
multiple Pods with different SELinux labels share the same volume. See the KEP for details.
We strongly encourage users that run Kubernetes with SELinux enabled to test this feature and
provide any feedback on the KEP issue.
| Feature gate                 | Stage in v1.30 | Behavior change |
|------------------------------|----------------|-----------------|
| SELinuxMountReadWriteOncePod | Beta           | No              |
| SELinuxMount                 | Alpha          | Yes             |
Both feature gates SELinuxMountReadWriteOncePod and SELinuxMount