Kubernetes v1.33: Octarine
https://kubernetes.io/blog/2025/04/23/kubernetes-v1-33-release/
Editors: Agustina Barbetta, Aakanksha Bhende, Udi Hofesh, Ryota Sawada, Sneha Yadav
Similar to previous releases, the release of Kubernetes v1.33 introduces new stable, beta, and alpha
features. The consistent delivery of high-quality releases underscores the strength of our
development cycle and the vibrant support from our community.
This release consists of 64 enhancements. Of those enhancements, 18 have graduated to Stable, 20 are
entering Beta, 24 have entered Alpha, and 2 are deprecated or withdrawn.
There are also several notable deprecations and removals in this
release; make sure to read about those if you already run an older version of Kubernetes.
Release theme and logo
The theme for Kubernetes v1.33 is Octarine: The Color of Magic[1], inspired by Terry
Pratchett's Discworld series. This release highlights the open-source magic[2] that
Kubernetes enables across the ecosystem.
If you’re familiar with the world of Discworld, you might recognize a small swamp dragon perched
atop the tower of the Unseen University, gazing up at the Kubernetes moon above the city of
Ankh-Morpork with 64 stars[3] in the background.
As Kubernetes moves into its second decade, we celebrate the wizardry of its maintainers, the
curiosity of new contributors, and the collaborative spirit that fuels the project. The v1.33
release is a reminder that, as Pratchett wrote, “It’s still magic even if you know how it’s done.”
Even if you know the ins and outs of the Kubernetes code base, stepping back at the end of the
release cycle, you’ll realize that Kubernetes remains magical.
Kubernetes v1.33 is a testament to the enduring power of open-source innovation, where hundreds of
contributors[4] from around the world work together to create something truly
extraordinary. Behind every new feature, the Kubernetes community works to maintain and improve the
project, ensuring it remains secure, reliable, and released on time. Each release builds on the
last, creating something greater than we could achieve alone.
[1] Octarine is the mythical eighth color, visible only to those attuned to the arcane: wizards,
witches, and, of course, cats. And occasionally, someone who's stared at iptables rules for too
long.
[2] Any sufficiently advanced technology is indistinguishable from magic…?
[3] It's not a coincidence that 64 KEPs (Kubernetes Enhancement Proposals) are also included in
v1.33.
[4] See the Project Velocity section for v1.33 🚀
Spotlight on key updates
Kubernetes v1.33 is packed with new features and improvements. Here are a few select updates the
Release Team would like to highlight!
Stable: Sidecar containers
The sidecar pattern involves deploying separate auxiliary container(s) to handle extra capabilities
in areas such as networking, logging, and metrics gathering. Sidecar containers graduate to stable
in v1.33.
Kubernetes implements sidecars as a special class of init containers with restartPolicy: Always,
ensuring that sidecars start before application containers, remain running throughout the pod's
lifecycle, and terminate automatically after the main containers exit.
Additionally, sidecars can utilize probes (startup, readiness, liveness) to signal their operational
state, and their Out-Of-Memory (OOM) score adjustments are aligned with primary containers to
prevent premature termination under memory pressure.
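As an illustration, here is a minimal sketch of a Pod using the stable sidecar pattern; the Pod name, container names, and images are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper     # hypothetical name
spec:
  initContainers:
  - name: log-shipper            # the sidecar: an init container that keeps running
    image: busybox:1.37
    restartPolicy: Always        # this marks the init container as a sidecar
    command: ['sh', '-c', 'tail -F /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  containers:
  - name: app                    # the main application container
    image: busybox:1.37
    command: ['sh', '-c', 'while true; do date >> /var/log/app/app.log; sleep 5; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
```

Because the sidecar is declared under initContainers with restartPolicy: Always, the kubelet starts it before the app container and keeps it running for the Pod's lifetime.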
To learn more, read Sidecar Containers.
This work was done as part of KEP-753: Sidecar Containers led by SIG Node.
Beta: In-place resource resize for vertical scaling of Pods
Workloads can be defined using APIs like Deployment, StatefulSet, etc. These describe the template
for the Pods that should run, including memory and CPU resources, as well as the number of Pod
replicas that should run. Workloads can be scaled horizontally by updating the Pod replica
count, or vertically by updating the resources required in the Pod's container(s). Before this
enhancement, container resources defined in a Pod's spec were immutable, and updating any of these
details within a Pod template would trigger Pod replacement.
But what if you could dynamically update the resource configuration for your existing Pods without
restarting them?
KEP-1287 is precisely what allows such in-place Pod updates. It was
released as alpha in v1.27, and has graduated to beta in v1.33. This opens up various possibilities
for vertical scale-up of stateful processes without any downtime, seamless scale-down when the
traffic is low, and even allocating larger resources during startup, which can then be reduced once
the initial setup is complete.
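For example, a Pod can declare per-resource resize behavior through the container-level resizePolicy field; the Pod name and image below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-app            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.27
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
    resizePolicy:                # how each resource reacts to an in-place resize
    - resourceName: cpu
      restartPolicy: NotRequired       # CPU can change without restarting the container
    - resourceName: memory
      restartPolicy: RestartContainer  # memory changes restart this container
```

With the beta feature enabled, updating a running Pod's resources (as of v1.33, through the Pod's resize subresource) adjusts the container in place according to these policies instead of replacing the Pod.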
This work was done as part of KEP-1287: In-Place Update of Pod Resources
led by SIG Node and SIG Autoscaling.
Alpha: New configuration option for kubectl with .kuberc for user preferences
In v1.33, kubectl introduces a new alpha feature: an opt-in configuration file, .kuberc, for user
preferences. This file can contain kubectl aliases and overrides (e.g. defaulting to use
server-side apply), while leaving cluster
credentials and host information in kubeconfig. This separation allows sharing the same user
preferences for kubectl interactions, regardless of the target cluster and kubeconfig in use.
To enable this alpha feature, set the environment variable KUBECTL_KUBERC=true and
create a .kuberc configuration file. By default, kubectl looks for this file in
~/.kube/kuberc. You can also specify an alternative location using the --kuberc flag, for
example: kubectl --kuberc /var/kube/rc.
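As a sketch, a .kuberc file might look like the following. This follows the v1alpha1 schema as the feature is alpha, so field names may change in later releases, and the getn alias is hypothetical:

```yaml
apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preference
aliases:
- name: getn                     # hypothetical alias, invoked as: kubectl getn
  command: get
  appendArgs:
  - --output=wide
overrides:
- command: apply
  flags:
  - name: server-side            # default `kubectl apply` to server-side apply
    default: "true"
```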
This work was done as part of
KEP-3104: Separate kubectl user preferences from cluster configs led by
SIG CLI.
Features graduating to Stable
This is a selection of some of the improvements that are now stable following the v1.33 release.
Backoff limits per index for indexed Jobs
This release graduates a feature that allows setting backoff limits on a per-index basis for Indexed
Jobs. Traditionally, the backoffLimit parameter in Kubernetes Jobs specifies the number of retries
before considering the entire Job as failed. This enhancement allows each index within an Indexed
Job to have its own backoff limit, providing more granular control over retry behavior for
individual tasks. This ensures that the failure of specific indices does not prematurely terminate
the entire Job, allowing the other indices to continue processing independently.
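A minimal sketch of an Indexed Job using this feature (the Job name and image are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: per-index-backoff        # hypothetical name
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed
  backoffLimitPerIndex: 2        # each index may be retried up to 2 times
  maxFailedIndexes: 3            # fail the whole Job once more than 3 indexes fail
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.37
        command: ['sh', '-c', 'echo "processing index $JOB_COMPLETION_INDEX"']
```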
This work was done as part of
KEP-3850: Backoff Limit Per Index For Indexed Jobs led by SIG Apps.
Job success policy
Using .spec.successPolicy, users can specify which pod indexes must succeed (succeededIndexes),
how many pods must succeed (succeededCount), or a combination of both. This feature benefits
various workloads, including simulations where partial completion is sufficient, and leader-worker
patterns where only the leader's success determines the Job's overall outcome.
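For instance, a leader-worker style Indexed Job can declare that only index 0 (the leader) needs to succeed; the names below are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: leader-worker            # hypothetical name
spec:
  completions: 4
  parallelism: 4
  completionMode: Indexed
  successPolicy:
    rules:
    - succeededIndexes: "0"      # the Job succeeds once index 0 succeeds
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.37
        command: ['sh', '-c', 'echo "index $JOB_COMPLETION_INDEX"']
```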
This work was done as part of KEP-3998: Job success/completion policy led
by SIG Apps.
Bound ServiceAccount token security improvements
This enhancement introduced features such as including a unique token identifier (i.e.
JWT ID Claim, also known as JTI) and
node information within the tokens, enabling more precise validation and auditing. Additionally, it
supports node-specific restrictions, ensuring that tokens are only usable on designated nodes,
thereby reducing the risk of token misuse and potential security breaches. These improvements, now
generally available, aim to enhance the overall security posture of service account tokens within
Kubernetes clusters.
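For illustration, the payload of a bound token might carry claims along these lines, sketched in YAML for readability; the values are hypothetical and abridged, and the exact claim layout depends on cluster configuration:

```yaml
# Abridged, illustrative payload of a bound ServiceAccount token (a JWT).
iss: https://kubernetes.default.svc
sub: system:serviceaccount:default:my-sa
jti: 7d9c3b1a-...                # unique JWT ID for precise validation and auditing (truncated)
kubernetes.io:
  namespace: default
  serviceaccount:
    name: my-sa
  node:                          # node binding restricts where the token is usable
    name: node-a
```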
This work was done as part of
KEP-4193: Bound service account token improvements led by SIG Auth.
Subresource support in kubectl
The --subresource argument is now generally available for kubectl subcommands such as get,
patch, edit, apply and replace, allowing users to fetch and update subresources for all
resources that support them. To learn more about the subresources supported, visit the
kubectl reference.
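For example, assuming a Deployment named my-app (hypothetical), kubectl get deployment my-app --subresource=status fetches only the status subresource, and a patch file like the following can update it:

```yaml
# status-patch.yaml (hypothetical values), applied with:
#   kubectl patch deployment my-app --subresource=status --type=merge --patch-file status-patch.yaml
status:
  observedGeneration: 5
```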
This work was done as part of
KEP-2590: Add subresource support to kubectl led by SIG CLI.
Multiple Service CIDRs
This enhancement introduced a new implementation of allocation logic for Service IPs. Across the
whole cluster, every Service of type: ClusterIP must have a unique IP address assigned to it.
Trying to create a Service with a specific cluster IP that has already been allocated will return an
error. The updated IP address allocator logic uses two newly stable API objects: ServiceCIDR and
IPAddress. Now generally available, these APIs allow cluster administrators to dynamically
increase the number of IP addresses available for type: ClusterIP Services (by creating new
ServiceCIDR objects).
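For example, a cluster administrator could add an extra allocation range with a ServiceCIDR object like this (the object name and CIDR range are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr       # hypothetical name
spec:
  cidrs:
  - 10.100.0.0/16                # additional range for ClusterIP allocation
```

Once the object is ready, new type: ClusterIP Services can be assigned addresses from the added range.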
This work was done as part of KEP-1880: Multiple Service CIDRs led by SIG
Network.
nftables backend for kube-proxy
The nftables backend for kube-proxy is now stable, adding a new implementation that significantly
improves the performance and scalability of the Service implementation within Kubernetes clusters. For
compatibility reasons, iptables remains the default on Linux nodes. Check the
migration guide
if you want to try it out.
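Switching a node's kube-proxy to the nftables backend is a one-line change in its configuration; a minimal sketch:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables                   # the default on Linux nodes remains iptables
```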
This work was done as part of KEP-3866: nftables kube-proxy backend led
by SIG Network.
Topology aware routing with trafficDistribution: PreferClose
This release graduates topology-aware routing and traffic distribution to GA, allowing service
traffic to be optimized in multi-zone clusters. Topology-aware hints in EndpointSlices
enable components like kube-proxy to prioritize routing traffic to endpoints within the same zone,
thereby reducing latency and cross-zone data transfer costs. Building upon this, the
trafficDistribution field has been added to the Service specification, with the PreferClose option
directing traffic to the nearest available endpoints based on network topology. This configuration
enhances performance and cost-efficiency by minimizing inter-zone communication.
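Opting a Service into this behavior is a single field; the Service below is hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service               # hypothetical name
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose   # prefer topologically close endpoints
```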
This work was done as part of KEP-4444: Traffic Distribution for Services led by SIG Network.