
Continuing the transition from Endpoints to EndpointSlices
https://kubernetes.io/blog/2025/04/24/endpoints-deprecation/
Since the addition of EndpointSlices (KEP-752) as alpha in v1.15 and later GA in v1.21, the Endpoints API in Kubernetes has been gathering dust. New Service features like dual-stack networking and traffic distribution are only supported via the EndpointSlice API, so all service proxies, Gateway API implementations, and similar controllers have had to be ported from using Endpoints to using EndpointSlices. At this point, the Endpoints API is really only there to avoid breaking end user workloads and scripts that still make use of it.
As of Kubernetes 1.33, the Endpoints API is now officially deprecated, and the API server will return warnings to users who read or write Endpoints resources rather than using EndpointSlices.
Eventually, the plan (as documented in KEP-4974) is to change the Kubernetes Conformance criteria to no longer require that clusters run the Endpoints controller (which generates Endpoints objects based on Services and Pods), to avoid doing work that is unneeded in most modern-day clusters.
Thus, while the Kubernetes deprecation policy means that the Endpoints type itself will probably never completely go away, users who still have workloads or scripts that use the Endpoints API should start migrating them to EndpointSlices.
Notes on migrating from Endpoints to EndpointSlices
Consuming EndpointSlices rather than Endpoints
For end users, the biggest change between the Endpoints API and the EndpointSlice API is that while every Service with a selector has exactly 1 Endpoints object (with the same name as the Service), a Service may have any number of EndpointSlices associated with it:
$ kubectl get endpoints myservice
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME        ENDPOINTS         AGE
myservice   10.180.3.17:443   1h

$ kubectl get endpointslice -l kubernetes.io/service-name=myservice
NAME              ADDRESSTYPE   PORTS   ENDPOINTS          AGE
myservice-7vzhx   IPv4          443     10.180.3.17        21s
myservice-jcv8s   IPv6          443     2001:db8:0123::5   21s
In this case, because the service is dual stack, it has 2 EndpointSlices: 1 for IPv4 addresses and 1 for IPv6 addresses. (The Endpoints API does not support dual stack, so the Endpoints object shows only the addresses in the cluster's primary address family.) Although any Service with multiple endpoints can have multiple EndpointSlices, there are three main cases where you will see this:
An EndpointSlice can only represent endpoints of a single IP family, so dual-stack Services will have separate EndpointSlices for IPv4 and IPv6.
All of the endpoints in an EndpointSlice must target the same ports. So, for example, if you have a set of endpoint Pods listening on port 80, and roll out an update to make them listen on port 8080 instead, then while the rollout is in progress, the Service will need 2 EndpointSlices: 1 for the endpoints listening on port 80, and 1 for the endpoints listening on port 8080.
When a Service has more than 100 endpoints, the EndpointSlice controller will split the endpoints into multiple EndpointSlices rather than aggregating them into a single excessively-large object like the Endpoints controller does.
Because there is not a predictable 1-to-1 mapping between Services and EndpointSlices, there is no way to know what the actual name of the EndpointSlice resource(s) for a Service will be ahead of time; thus, instead of fetching the EndpointSlice(s) by name, you instead ask for all EndpointSlices with a "kubernetes.io/service-name" label pointing to the Service:
$ kubectl get endpointslice -l kubernetes.io/service-name=myservice
A similar change is needed in Go code. With Endpoints, you would do something like:
// Get the Endpoints named "name" in "namespace".
endpoint, err := client.CoreV1().Endpoints(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
    if apierrors.IsNotFound(err) {
        // No Endpoints exists for the Service (yet?)
        ...
    }
    // handle other errors
    ...
}

// process endpoint
...
With EndpointSlices, this becomes:
// Get all EndpointSlices for Service "name" in "namespace".
slices, err := client.DiscoveryV1().EndpointSlices(namespace).List(ctx,
    metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=" + name})
if err != nil {
    // handle errors
    ...
} else if len(slices.Items) == 0 {
    // No EndpointSlices exist for the Service (yet?)
    ...
}

// process slices.Items
...
Generating EndpointSlices rather than Endpoints
For people (or controllers) generating Endpoints, migrating to EndpointSlices is slightly easier, because in most cases you won't have to worry about multiple slices. You just need to update your YAML or Go code to use the new type (which organizes the information in a slightly different way than Endpoints did).
For example, this Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice
subsets:
  - addresses:
      - ip: 10.180.3.17
        nodeName: node-4
      - ip: 10.180.5.22
        nodeName: node-9
      - ip: 10.180.18.2
        nodeName: node-7
    notReadyAddresses:
      - ip: 10.180.6.6
        nodeName: node-8
    ports:
      - name: https
        protocol: TCP
        port: 443
would become something like:
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: myservice
  labels:
    kubernetes.io/service-name: myservice
addressType: IPv4
endpoints:
  - addresses:
      - 10.180.3.17
    nodeName: node-4
  - addresses:
      - 10.180.5.22
    nodeName: node-9
  - addresses:
      - 10.180.18.12
    nodeName: node-7
  - addresses:
      - 10.180.6.6
    nodeName: node-8
    conditions:
      ready: false
ports:
  - name: https
    protocol: TCP
    port: 443
Some points to note:
This example uses an explicit name, but you could also use generateName and let the API server append a unique suffix. The name itself does not matter: what matters is the "kubernetes.io/service-name" label pointing back to the Service.
You have to explicitly indicate addressType: IPv4 (or IPv6).
An EndpointSlice is similar to a single element of the "subsets" array in Endpoints. An Endpoints object with multiple subsets will normally need to be expressed as multiple EndpointSlices, each with different "ports".
The endpoints and addresses fields are both arrays, but by convention, each addresses array only contains a single element. If your Service has multiple endpoints, then you need to have multiple elements in the endpoints array, each with a single element in its addresses array.
The Endpoints API lists "ready" and "not-ready" endpoints separately, while the EndpointSlice API allows each endpoint to have conditions (such as "ready: false") associated with it.
And of course, once you have ported to EndpointSlice, you can make use of EndpointSlice-specific features, such as topology hints and terminating endpoints. Consult the EndpointSlice API documentation for more information.
via Kubernetes Blog https://kubernetes.io/
April 24, 2025 at 02:30PM
Week Ending April 20, 2025
https://lwkd.info/2025/20250423
Developer News
Kubernetes 1.33 is released. Top features include support for .kuberc, the graduation of sidecar containers to stable, and lots of DRA features. There are dozens of other graduations to stable, and seven more major new features in alpha. Don’t forget to check the deprecations and removals and the upgrade notes before upgrading.
Many dependencies were added, removed, or updated in 1.33 as well.
Interested in writing for the Kubernetes blog? SIG-Docs has published expanded blog guidelines to help you select appropriate topics and style.
Release Schedule
Next Deadline: Release day, 23 April
Kubernetes v1.33 is released today! We have a total of 63 tracked enhancements this cycle, higher than 1.32’s 45 enhancements. Thank you to everyone who contributed to Kubernetes v1.33 and made such a productive release cycle possible.
April patch releases are also out this week: 1.29.16, 1.30.12, 1.31.8, and 1.32.4. These are bugfix-only patch releases.
Shoutouts
Wendy Ha, the Release Signal lead for v1.33, gave a shoutout to the v1.33 Release Signal Team: @Sean McGinnis, @elieser1101, @Rajalakshmi Girish, @ChengHao Yang (tico88612), and @nitish, for their incredible effort throughout this cycle. Folks stayed committed until the end while also contributing to other parts of the project. Huge thanks to everyone for their hard work!
Nina Polshakova: Shout out to the amazing v1.33 Docs team for a smooth Docs freeze this week and all your hard work this release! rayandas, Melony Q. (aka.cloudmelon ), Sreeram Venkitesh, Urvashi, Arvind Parekh, Michelle Nguyen, Shedrack Akintayo
Nina Polshakova: Huge shoutout to the amazing Kubernetes v1.33 Comms team: Ryota, aibarbetta, Aakanksha Bhende, Sneha Yadav, Udi Hofesh , for all your incredible work this cycle! You pulled off the biggest release announcement blog in recent memory — over 7,000 words, blowing past the previous record of ~5,000 from v1.31!
via Last Week in Kubernetes Development https://lwkd.info/
April 23, 2025 at 03:30PM
Kubernetes v1.33: Octarine
https://kubernetes.io/blog/2025/04/23/kubernetes-v1-33-release/
Editors: Agustina Barbetta, Aakanksha Bhende, Udi Hofesh, Ryota Sawada, Sneha Yadav
Similar to previous releases, the release of Kubernetes v1.33 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.
This release consists of 64 enhancements. Of those enhancements, 18 have graduated to Stable, 20 are entering Beta, 24 have entered Alpha, and 2 are deprecated or withdrawn.
There are also several notable deprecations and removals in this release; make sure to read about those if you already run an older version of Kubernetes.
Release theme and logo
The theme for Kubernetes v1.33 is Octarine: The Color of Magic [1], inspired by Terry Pratchett’s Discworld series. This release highlights the open-source magic [2] that Kubernetes enables across the ecosystem.
If you’re familiar with the world of Discworld, you might recognize a small swamp dragon perched atop the tower of the Unseen University, gazing up at the Kubernetes moon above the city of Ankh-Morpork with 64 stars [3] in the background.
As Kubernetes moves into its second decade, we celebrate the wizardry of its maintainers, the curiosity of new contributors, and the collaborative spirit that fuels the project. The v1.33 release is a reminder that, as Pratchett wrote, “It’s still magic even if you know how it’s done.” Even if you know the ins and outs of the Kubernetes code base, stepping back at the end of the release cycle, you’ll realize that Kubernetes remains magical.
Kubernetes v1.33 is a testament to the enduring power of open-source innovation, where hundreds of contributors [4] from around the world work together to create something truly extraordinary. Behind every new feature, the Kubernetes community works to maintain and improve the project, ensuring it remains secure, reliable, and released on time. Each release builds upon the other, creating something greater than we could achieve alone.
[1] Octarine is the mythical eighth color, visible only to those attuned to the arcane—wizards, witches, and, of course, cats. And occasionally, someone who’s stared at iptables rules for too long.
[2] Any sufficiently advanced technology is indistinguishable from magic…?
[3] It’s not a coincidence that 64 KEPs (Kubernetes Enhancement Proposals) are also included in v1.33.
[4] See the Project Velocity section for v1.33 🚀
Spotlight on key updates
Kubernetes v1.33 is packed with new features and improvements. Here are a few select updates the Release Team would like to highlight!
Stable: Sidecar containers
The sidecar pattern involves deploying separate auxiliary container(s) to handle extra capabilities in areas such as networking, logging, and metrics gathering. Sidecar containers graduate to stable in v1.33.
Kubernetes implements sidecars as a special class of init containers with restartPolicy: Always, ensuring that sidecars start before application containers, remain running throughout the pod's lifecycle, and terminate automatically after the main containers exit.
Additionally, sidecars can utilize probes (startup, readiness, liveness) to signal their operational state, and their Out-Of-Memory (OOM) score adjustments are aligned with primary containers to prevent premature termination under memory pressure.
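To make that concrete, here is a minimal sketch of a Pod with a native sidecar; the log-shipper command, images, and names below are illustrative placeholders rather than anything from the release notes:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper              # sidecar: an init container with restartPolicy: Always
      image: alpine:3.21
      restartPolicy: Always
      command: ['sh', '-c', 'tail -F /var/log/app/app.log']
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  containers:
    - name: app                      # main application container (placeholder image)
      image: nginx:1.27
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}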
To learn more, read Sidecar Containers.
This work was done as part of KEP-753: Sidecar Containers led by SIG Node.
Beta: In-place resource resize for vertical scaling of Pods
Workloads can be defined using APIs like Deployment, StatefulSet, etc. These describe the template for the Pods that should run, including memory and CPU resources, as well as the number of Pod replicas that should run. Workloads can be scaled horizontally by updating the Pod replica count, or vertically by updating the resources required in the Pod's container(s). Before this enhancement, container resources defined in a Pod's spec were immutable, and updating any of these details within a Pod template would trigger Pod replacement.
But what if you could dynamically update the resource configuration for your existing Pods without restarting them?
KEP-1287 exists precisely to allow such in-place Pod updates. It was released as alpha in v1.27, and has graduated to beta in v1.33. This opens up various possibilities: vertical scale-up of stateful processes without any downtime, seamless scale-down when traffic is low, and even allocating larger resources during startup, which can then be reduced once the initial setup is complete.
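As a quick sketch of how this looks in practice, assuming a cluster and kubectl at v1.33 with the feature enabled, plus a hypothetical Pod named resize-demo with a container named app, the change is applied through the Pod's resize subresource:

# Request new CPU values for the "app" container of Pod "resize-demo" in place
kubectl patch pod resize-demo --subresource resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'

The Pod's status then reflects whether the resize has been applied, is still in progress, or cannot be satisfied on the current node.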
This work was done as part of KEP-1287: In-Place Update of Pod Resources led by SIG Node and SIG Autoscaling.
Alpha: New configuration option for kubectl with .kuberc for user preferences
In v1.33, kubectl introduces a new alpha feature: an opt-in configuration file, .kuberc, for user preferences. This file can contain kubectl aliases and overrides (e.g. defaulting to server-side apply), while leaving cluster credentials and host information in kubeconfig. This separation allows the same user preferences to be shared across kubectl interactions, regardless of the target cluster and kubeconfig used.
To enable this alpha feature, set the environment variable KUBECTL_KUBERC=true and create a .kuberc configuration file. By default, kubectl looks for this file in ~/.kube/kuberc. You can also specify an alternative location using the --kuberc flag, for example: kubectl --kuberc /var/kube/rc.
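For example, based on the environment variable and flag described above (the pod listing is just an arbitrary command):

# Opt in to the alpha feature; kubectl then reads preferences from ~/.kube/kuberc by default
export KUBECTL_KUBERC=true
kubectl get pods

# Point kubectl at a preferences file in a non-default location
kubectl --kuberc /var/kube/rc get pods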
This work was done as part of KEP-3104: Separate kubectl user preferences from cluster configs led by SIG CLI.
Features graduating to Stable
This is a selection of some of the improvements that are now stable following the v1.33 release.
Backoff limits per index for indexed Jobs
This release graduates a feature that allows setting backoff limits on a per-index basis for Indexed Jobs. Traditionally, the backoffLimit parameter in Kubernetes Jobs specifies the number of retries before considering the entire Job as failed. This enhancement allows each index within an Indexed Job to have its own backoff limit, providing more granular control over retry behavior for individual tasks. This ensures that the failure of specific indices does not prematurely terminate the entire Job, allowing the other indices to continue processing independently.
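The announcement doesn't include a manifest, but a minimal sketch of an Indexed Job using these fields might look like the following (image, command, and counts are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: per-index-backoff
spec:
  completions: 8
  parallelism: 2
  completionMode: Indexed
  backoffLimitPerIndex: 1     # each index may be retried once before that index is marked failed
  maxFailedIndexes: 3         # terminate the whole Job only once more than 3 indexes have failed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          command: ['sh', '-c', 'echo "processing index $JOB_COMPLETION_INDEX"']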
This work was done as part of KEP-3850: Backoff Limit Per Index For Indexed Jobs led by SIG Apps.
Job success policy
Using .spec.successPolicy, users can specify which pod indexes must succeed (succeededIndexes), how many pods must succeed (succeededCount), or a combination of both. This feature benefits various workloads, including simulations where partial completion is sufficient, and leader-worker patterns where only the leader's success determines the Job's overall outcome.
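A minimal sketch of a leader-worker style Job using this policy (the indexes, image, and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: leader-worker
spec:
  completions: 4
  parallelism: 4
  completionMode: Indexed
  successPolicy:
    rules:
      - succeededIndexes: "0"   # the Job is marked successful once index 0 (the leader) succeeds
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox:1.36
          command: ['sh', '-c', 'echo "running index $JOB_COMPLETION_INDEX"']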
This work was done as part of KEP-3998: Job success/completion policy led by SIG Apps.
Bound ServiceAccount token security improvements
This enhancement introduced features such as including a unique token identifier (i.e. JWT ID Claim, also known as JTI) and node information within the tokens, enabling more precise validation and auditing. Additionally, it supports node-specific restrictions, ensuring that tokens are only usable on designated nodes, thereby reducing the risk of token misuse and potential security breaches. These improvements, now generally available, aim to enhance the overall security posture of service account tokens within Kubernetes clusters.
This work was done as part of KEP-4193: Bound service account token improvements led by SIG Auth.
Subresource support in kubectl
The --subresource argument is now generally available for kubectl subcommands such as get, patch, edit, apply and replace, allowing users to fetch and update subresources for all resources that support them. To learn more about the subresources supported, visit the kubectl reference.
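For example (mydeploy is a placeholder name; which subresources are available depends on the resource):

# Read the status subresource of a Deployment
kubectl get deployment mydeploy --subresource=status -o yaml

# Scale a Deployment by patching its scale subresource
kubectl patch deployment mydeploy --subresource=scale --patch '{"spec":{"replicas":3}}'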
This work was done as part of KEP-2590: Add subresource support to kubectl led by SIG CLI.
Multiple Service CIDRs
This enhancement introduced a new implementation of allocation logic for Service IPs. Across the whole cluster, every Service of type: ClusterIP must have a unique IP address assigned to it. Trying to create a Service with a specific cluster IP that has already been allocated will return an error. The updated IP address allocator logic uses two newly stable API objects: ServiceCIDR and IPAddress. Now generally available, these APIs allow cluster administrators to dynamically increase the number of IP addresses available for type: ClusterIP Services (by creating new ServiceCIDR objects).
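A minimal sketch of extending the Service IP space with an additional range; the CIDR shown is an arbitrary example and must not overlap with other networks in your cluster:

apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
    - 10.96.128.0/20   # additional range to allocate type: ClusterIP Service IPs from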
This work was done as part of KEP-1880: Multiple Service CIDRs led by SIG Network.
nftables backend for kube-proxy
The nftables backend for kube-proxy is now stable, adding a new implementation that significantly improves performance and scalability of the Services implementation within Kubernetes clusters. For compatibility reasons, iptables remains the default on Linux nodes. Check the migration guide if you want to try it out.
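How you opt in depends on how kube-proxy is deployed in your cluster; as a minimal sketch, switching modes is a one-line change in the KubeProxyConfiguration (the same setting is also exposed via the --proxy-mode flag):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Select the nftables proxy mode instead of the default iptables mode on Linux nodes
mode: nftables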
This work was done as part of KEP-3866: nftables kube-proxy backend led by SIG Network.
Topology aware routing with trafficDistribution: PreferClose
This release graduates topology-aware routing and traffic distribution to GA, allowing service traffic to be optimized in multi-zone clusters. Topology-aware hints in EndpointSlices enable components like kube-proxy to prioritize routing traffic to endpoints within the same zone, thereby reducing latency and cross-zone data transfer costs. Building upon this, the trafficDistribution field was added to the Service specification, with the PreferClose option directing traffic to the nearest available endpoints based on network topology. This configuration enhances performance and cost-efficiency by minimizing inter-zone communication.
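As a minimal sketch of opting in (the selector and ports below are placeholder values), the field sits directly on the Service spec:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  # Prefer topologically close endpoints (e.g. same zone) when they are available
  trafficDistribution: PreferClose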
This work w
Ep19 - Ask Me Anything About Anything with Scott Rosenberg and Ramiro Berrelleza
There are no restrictions in this AMA session. You can ask anything about DevOps, Cloud, Kubernetes, Platform Engineering, containers, or anything else. We'll have special guests Scott Rosenberg and Ramiro Berrelleza to help us out.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: Codefresh 🔗 Codefresh GitOps Cloud: https://codefresh.io ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=EZ6wodGif0Q
Replacing StatefulSets with a custom Kubernetes operator in our Postgres cloud platform, with Andrew Charlton
Discover why standard Kubernetes StatefulSets might not be sufficient for your database workloads and how custom operators can provide better solutions for stateful applications.
Andrew Charlton, Staff Software Engineer at Timescale, explains how they replaced Kubernetes StatefulSets with a custom operator called Popper for their PostgreSQL Cloud Platform. He details the technical limitations they encountered with StatefulSets and how their custom approach provides more intelligent management of database clusters.
You will learn:
Why StatefulSets fall short for managing high-availability PostgreSQL clusters, particularly around pod ordering and volume management
How Timescale's instance matching approach solves complex reconciliation challenges when managing heterogeneous database workloads
The benefits of implementing discrete, idempotent actions rather than workflows in Kubernetes operators
Real-world examples of operations that became possible with their custom operator, including volume downsizing and availability zone consolidation
Sponsor
This episode is brought to you by mirrord — run local code like in your Kubernetes cluster without deploying first.
More info
Find all the links and info for this episode here: https://ku.bz/fhZ_pNXM3
Interested in sponsoring an episode? Learn more.
via KubeFM https://kube.fm
April 22, 2025 at 08:59AM
Kubernetes Multicontainer Pods: An Overview
https://kubernetes.io/blog/2025/04/22/multi-container-pods-overview/
As cloud-native architectures continue to evolve, Kubernetes has become the go-to platform for deploying complex, distributed systems. One of the most powerful yet nuanced design patterns in this ecosystem is the sidecar pattern—a technique that allows developers to extend application functionality without diving deep into source code.
The origins of the sidecar pattern
Think of a sidecar like a trusty companion motorcycle attachment. Historically, IT infrastructures have always used auxiliary services to handle critical tasks. Before containers, we relied on background processes and helper daemons to manage logging, monitoring, and networking. The microservices revolution transformed this approach, making sidecars a structured and intentional architectural choice. With the rise of microservices, the sidecar pattern became more clearly defined, allowing developers to offload specific responsibilities from the main service without altering its code. Service meshes like Istio and Linkerd have popularized sidecar proxies, demonstrating how these companion containers can elegantly handle observability, security, and traffic management in distributed systems.
Kubernetes implementation
In Kubernetes, sidecar containers operate within the same Pod as the main application, enabling communication and resource sharing. Does this sound just like defining multiple containers alongside each other inside the Pod? It actually does, and this is how sidecar containers had to be implemented before Kubernetes v1.29.0, which introduced native support for sidecars. Sidecar containers can now be defined within a Pod manifest using the spec.initContainers field. What makes it a sidecar container is that you specify it with restartPolicy: Always. You can see an example of this below, which is a partial snippet of the full Kubernetes manifest:
initContainers:
  - name: logshipper
    image: alpine:latest
    restartPolicy: Always
    command: ['sh', '-c', 'tail -F /opt/logs.txt']
    volumeMounts:
      - name: data
        mountPath: /opt
That field name, spec.initContainers, may sound confusing. How come when you want to define a sidecar container, you have to put an entry in the spec.initContainers array? Classic init containers run to completion just before the main application starts, so they are one-off, whereas sidecars often run in parallel with the main app container. It is the restartPolicy: Always setting that distinguishes Kubernetes-native sidecar containers from classic init containers and ensures they are always up.
When to embrace (or avoid) sidecars
While the sidecar pattern can be useful in many cases, it is generally not the preferred approach unless the use case justifies it. Adding a sidecar increases complexity, resource consumption, and potential network latency. Instead, simpler alternatives such as built-in libraries or shared infrastructure should be considered first.
Deploy a sidecar when:
You need to extend application functionality without touching the original code
Implementing cross-cutting concerns like logging, monitoring or security
Working with legacy applications requiring modern networking capabilities
Designing microservices that demand independent scaling and updates
Proceed with caution if:
Resource efficiency is your primary concern
Minimal network latency is critical
Simpler alternatives exist
You want to minimize troubleshooting complexity
Four essential multi-container patterns
Init container pattern
The Init container pattern is used to execute (often critical) setup tasks before the main application container starts. Unlike regular containers, init containers run to completion and then terminate, ensuring that preconditions for the main application are met.
Ideal for:
Preparing configurations
Loading secrets
Verifying dependency availability
Running database migrations
The init container ensures your application starts in a predictable, controlled environment without code modifications.
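A minimal sketch of the pattern, assuming a hypothetical db-service that the application depends on:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      # Block until the (hypothetical) database Service is reachable, then exit
      command: ['sh', '-c', 'until nc -z db-service 5432; do echo waiting for db; sleep 2; done']
  containers:
    - name: app
      image: nginx:1.27   # placeholder for the main application image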
Ambassador pattern
An ambassador container provides Pod-local helper services that expose a simple way to access a network service. Commonly, ambassador containers send network requests on behalf of an application container and take care of challenges such as service discovery, peer identity verification, or encryption in transit (see the sketch after the list below).
Perfect when you need to:
Offload client connectivity concerns
Implement language-agnostic networking features
Add security layers like TLS
Create robust circuit breakers and retry mechanisms
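A rough sketch of the idea is below; the image names and the upstream Redis host are hypothetical, and a real ambassador would more likely be a properly configured proxy such as HAProxy or Envoy:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
    - name: app
      image: registry.example.com/myapp:1.0   # hypothetical app; talks to "Redis" via localhost:6379
    - name: redis-ambassador
      image: alpine/socat                     # simple TCP forwarder standing in for a real proxy (illustrative image)
      command:
        - socat
        - TCP-LISTEN:6379,fork,reuseaddr
        - TCP:redis.example.internal:6379     # hypothetical upstream Redis endpoint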
Configuration helper
A configuration helper sidecar provides configuration updates to an application dynamically, ensuring it always has access to the latest settings without disrupting the service. Often the helper needs to provide an initial configuration before the application is able to start successfully (a sketch follows the list below).
Use cases:
Fetching environment variables and secrets
Polling configuration changes
Decoupling configuration management from application logic
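A rough sketch of the shape of this pattern (the config endpoint and application image are hypothetical placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-helper
spec:
  containers:
    - name: app
      image: registry.example.com/myapp:1.0   # hypothetical app; reads /etc/app/config.yaml
      volumeMounts:
        - name: config
          mountPath: /etc/app
    - name: config-sync
      image: alpine:3.21
      # Poll a (hypothetical) config service and refresh the shared file every 60 seconds
      command: ['sh', '-c', 'while true; do wget -qO /etc/app/config.yaml http://config-service/app; sleep 60; done']
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      emptyDir: {}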
Adapter pattern
An adapter (or sometimes façade) container enables interoperability between the main application container and external services. It does this by translating data formats, protocols, or APIs.
Strengths:
Transforming legacy data formats
Bridging communication protocols
Facilitating integration between mismatched services
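A sketch of the shape of this pattern (both images are hypothetical placeholders; in practice the adapter is often an off-the-shelf exporter or protocol bridge):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter
spec:
  containers:
    - name: legacy-app
      image: registry.example.com/legacy-app:1.0       # hypothetical; exposes metrics in a custom format on :8080
    - name: metrics-adapter
      image: registry.example.com/metrics-adapter:1.0  # hypothetical; scrapes :8080 and re-exposes Prometheus metrics on :9100
      ports:
        - containerPort: 9100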
Wrap-up
While sidecar patterns offer tremendous flexibility, they're not a silver bullet. Each added sidecar introduces complexity, consumes resources, and potentially increases operational overhead. Always evaluate simpler alternatives first. The key is strategic implementation: use sidecars as precision tools to solve specific architectural challenges, not as a default approach. When used correctly, they can improve security, networking, and configuration management in containerized environments. Choose wisely, implement carefully, and let your sidecars elevate your container ecosystem.
via Kubernetes Blog https://kubernetes.io/
April 21, 2025 at 08:00PM
Mirrord Magic: Write Code Locally, See It Remotely!
Learn how to develop applications locally while integrating with remote production-like environments using mirrord. We'll demonstrate how to mirror and steal requests, connect to remote databases, and set up filtering to ensure a seamless development process without impacting others. Follow along as we configure and run mirrord, leveraging its capabilities to create an efficient and isolated development environment. This video will help you optimize your development workflow. Watch now to see mirrord in action!
#Development #Kubernetes #mirrord
Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/development/mirrord-magic-write-code-locally,-see-it-remotely 🔗 mirrord: https://mirrord.dev 🔗 UpCloud: https://upcloud.com
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Development Environments 02:17 Setting The Scene (Staging Environment) 06:24 Development Environments with mirrord 13:34 Stealing Requests with mirrord 15:15 Filtering Requests with mirrord 17:52 What Else? 19:32 mirrord Pros and Cons
via YouTube https://www.youtube.com/watch?v=NLa0K5mybzo
Week Ending April 13, 2025
https://lwkd.info/2025/20250416
Developer News
The next New Contributor Orientation will take place on Tuesday, April 22.
LFX Mentorship 2025 Term 2 is open for SIGs to submit projects for mentorship. To propose a mentoring project, PR it into project_ideas. If you have questions about LFX mentorship, ask in #sig-contribex.
All of the current SIG Scheduling leads are stepping down. They have nominated chairs Kensei Nakada and Maciej Skoczeń and TLs Kensei Nakada and Dominik Marciński to take their places.
Filip Křepinský, supported by SIG-Node, has proposed creating a Node Lifecycle Working Group.
Release Schedule
Next Deadline: Release day, 23 April
We are currently in Code Freeze.
Kubernetes v1.33.0-rc.1 is now available for testing.
Due to the Release Managers’ availability and a conflict with the v1.33.0-rc.1 release, the April 2025 patch releases have been postponed to the next week (Tuesday, April 22).
KEP of the Week
KEP 1769: Memory Manager
This KEP defined the Memory Manager, which has enabled guaranteed memory and hugepages allocation for pods in the Guaranteed QoS class. It supports both single-NUMA and multi-NUMA strategies, ensuring memory is allocated from the minimal number of NUMA nodes. The component provides NUMA affinity hints to the Topology Manager to support optimal pod placement. While single-NUMA is a special case of multi-NUMA, both strategies are described to highlight different use cases.
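The KEP text isn't reproduced here, but as a hedged sketch, the Static policy is enabled through the kubelet configuration; the reserved amount below is a placeholder and must match the node's overall memory reservation:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static
reservedMemory:
  - numaNode: 0
    limits:
      memory: 1Gi   # placeholder; must equal the node's total memory reservation for this NUMA node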
This KEP was implemented in Kubernetes 1.32.
Shoutouts
Nina Polshakova: Shout out to the amazing v1.33 Docs team for a smooth Docs freeze this week and all your hard work this release! rayandas, Melony Q. (aka.cloudmelon ), Sreeram Venkitesh, Urvashi, Arvind Parekh, Michelle Nguyen, Shedrack Akintayo
via Last Week in Kubernetes Development https://lwkd.info/
April 16, 2025 at 06:00PM