
r/devopsish
Ep04 - Ask Me Anything About DevOps, Cloud, Kubernetes, Platform Engineering,... w/Scott Rosenberg
There are no restrictions in this AMA session. You can ask anything about DevOps, Cloud, Kubernetes, Platform Engineering, containers, or anything else. We'll have a special guest Scott Rosenberg to help us out.
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=loKy3At3OyU
Mastering Kubernetes: Volumes (Persistent Volumes and Claims, ConfigMaps, Secrets, etc)
Dive into Kubernetes Volumes in this comprehensive tutorial! We'll cover local volumes like emptyDir, CSI Drivers, Storage Classes, Persistent Volumes, Persistent Volume Claims, ConfigMaps, and Secrets. Learn how to use these essential components in Kubernetes for efficient storage management. Understanding volumes is crucial whether you're developing locally or deploying in production. Follow along with practical examples and commands to master Kubernetes volumes. Perfect for beginners and experts alike.
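To give a flavor of what the tutorial walks through, here is a minimal sketch (not taken from the video; the Pod and ConfigMap names are placeholders) of a Pod that mounts an emptyDir volume for scratch space and a ConfigMap as read-only files:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # placeholder ConfigMap
data:
  app.properties: |
    greeting=hello
---
apiVersion: v1
kind: Pod
metadata:
  name: volumes-demo          # placeholder Pod
spec:
  containers:
  - name: app
    image: nginx:1.27
    volumeMounts:
    - name: scratch           # ephemeral data, deleted together with the Pod
      mountPath: /tmp/cache
    - name: config            # ConfigMap keys exposed as files
      mountPath: /etc/app
      readOnly: true
  volumes:
  - name: scratch
    emptyDir: {}
  - name: config
    configMap:
      name: app-config

The same volumeMounts/volumes pattern extends to Secrets and to PersistentVolumeClaims backed by a StorageClass, which the video covers in more depth.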
#Kubernetes #Volumes #PersistentStorage #KubernetesTutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: Zoho Corp 🔗 https://manageengine.com 🔗 https://www.zohocorp.com ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/kubernetes/mastering-kubernetes-volumes-emptydir,-csi-drivers,-storage-classes,-persistent-volumes-and-persistent-volume-claims,-configmaps,-and-secrets 🔗 Kubernetes: https://kubernetes.io 🎬 Mastering Kubernetes: Workloads APIs (Deployment, StatefulSet, ReplicaSet, Pod, etc.): https://youtu.be/U6weXlzQxoY
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Kubernetes Volumes Intro 00:34 Kubernetes Volumes (emptyDir) 01:17 ManageEngine (sponsor) 02:09 Kubernetes Volumes (emptyDir) (cont.) 05:03 Kubernetes Container Storage Interface (CSI) Drivers and Storage Classes 08:00 Kubernetes Persistent Volumes and Persistent Volume Claims 13:44 Kubernetes Config Maps 17:12 Kubernetes Secrets
via YouTube https://www.youtube.com/watch?v=yYQXKiiJzS8
Control Plane - What Are The Main Benefits of Implementing It?
What are the main benefits of developing and implementing Control Planes?
#controlplane #kubernetes
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=TW_UzIilfIE
Crossplane - What Are Examples Where It Can Be Used Other Than for Cloud Resources?
What are examples where Crossplane can (and should) be used other than for Cloud resources?
#crossplane #kubernetes
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=TlPvtY9Te9Q
Canary Deployments - Is It a Good Practice to Combine Argo Rollouts with Ingress and Istio?
Canary Deployments - Is It a Good Practice to Combine Argo Rollouts with Ingress and Istio?
#progressivedelivery #canarydeployments #kubernetes
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=uDbJ0VgvzNc
Week Ending December 15, 2024
https://lwkd.info/2024/20241216
Developer News
This will be the last LWKD issue of the year. Publication will resume in 2025 with the January 5th edition.
Submissions for the maintainer summit at KubeCon London are due January 12th. The CfPs for the main tracks at KubeCon China, KubeCon India, and KubeCon Japan are now open.
Release Schedule
Next Deadline: 1.33 Cycle Begins, January ??
The 1.33 development cycle will begin in early January, but a specific schedule has not been set.
Featured PRs
#128718 FG:InPlacePodVerticalScaling - Enable resizing containers without limits
This PR fixes critical bugs in the pod resize code, specifically addressing cases where containers lack resource limits. It ensures proper handling of these scenarios, enabling in-place vertical scaling for such containers. Also, the PR enhances test coverage to prevent regressions, marking a step forward for reliable container resizing in Kubernetes.
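For illustration only (the Pod name and image are placeholders, not taken from the PR), this is roughly the kind of spec the fix targets: a container that sets requests but no limits and opts into in-place resizing via resizePolicy, which requires the InPlacePodVerticalScaling feature gate to be enabled:

apiVersion: v1
kind: Pod
metadata:
  name: resize-demo                    # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.27                  # placeholder image
    resources:
      requests:                        # requests only, no limits: the case this PR addresses
        cpu: 500m
        memory: 256Mi
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # resize CPU in place, without restarting the container
    - resourceName: memory
      restartPolicy: NotRequired

With the feature gate on, updating the container's resources should then be applied in place rather than by recreating the Pod.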
KEP of the Week
KEP-3221: Structured Authorization Configuration
Currently, kube-apiserver configures its authorization chain using --authorization-* flags, limiting admins to a single webhook via --authorization-mode. This restricts creating ordered authorization chains with multiple webhooks. This proposal suggests a structured configuration for defining the authorization chain, supporting multiple webhooks with fine-grained controls, including an explicit Deny authorizer.
This KEP is tracked for alpha release in the ongoing v1.32 cycle.
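As a rough sketch of what such a structured configuration looks like (passed to kube-apiserver via --authorization-config; the webhook name and kubeconfig path are placeholders, and the apiVersion may differ depending on your Kubernetes release):

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
- type: Webhook
  name: example-policy-webhook         # placeholder name
  webhook:
    timeout: 3s
    subjectAccessReviewVersion: v1
    matchConditionSubjectAccessReviewVersion: v1
    failurePolicy: NoOpinion           # fall through to the next authorizer if the webhook fails
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig   # placeholder path
- type: Node
  name: node
- type: RBAC
  name: rbac

Multiple Webhook entries can appear in the chain, which is exactly what the flag-based configuration could not express.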
Other Merges
kubectl apply now coerces null values for labels and annotations in manifests to empty string values
Configure watch cache history window based on request timeout
kubectl: improved test coverage for cordon command
Removed the limitation on exposing port 10250 externally in service
kube-proxy extends the schema of metrics/ endpoints to incorporate info about corresponding IP family
Fix for data race in CBOR serializer’s custom marshaler type cache
kubelet: Improvements to reboot event reporting
kubeadm: removed preflight check for ip, iptables, ethtool and tc on Linux nodes
docs: example added for set-based requirement for -l/--selector flag
Drop use of winreadlinkvolume godebug option
kubelet: fix for issue mounting CSI volumes on Windows nodes in 1.32.0 release candidates
Added validation to versioned feature specs
Added kubelet validation for containerLogMaxFiles
scheduler: Renamed UpdatePodTolerations for code style consistency
kubeadm: Fix to not read kubeconfig from disk repeatedly in the init phase
Added a /flagz endpoint for kube-proxy
Adjustments to throughput threshold for new tests based on historical times to avoid flakiness.
Record dataTimestamp from external signers at float granularity
Use autoscalingv2 in kubectl autoscale
DRA: validations for labels in node selectors
Fix for memory leak in kube-proxy EndpointSliceCache
FG:InPlacePodVerticalScaling Remove ResizePolicy defaulting
Use generic sets rather than deprecated sets.String
Test EndpointSlice in dual-stack e2e tests
Fix for linting issue in TestNodeDeletionReleaseCIDR
Cleanup for ServiceChangeTracker and EndpointsChangeTracker
Improvements to validation for missing storedVersion
Documentation added for the existence of nftables as a kube-proxy mode
Fixed kubectl wait --for=create behavior with label selectors
Added non graceful shutdown integration test
Added validation for NodeSelectorRequirement’s values
Fix to prevent unnecessary resolving of iscsi/fc devices
Optionally set the User.UID from an x509 client cert
Fine-grained QHints for interpodaffinity plugin
Allow ContainerResource calculations to continue with missing metrics like Resource calculations
Added warning for duplicate port name definition
Deprecated
Removed support for v1alpha1 version of ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding API kinds.
kube-apiserver: inactive serving code is removed for authentication.k8s.io/v1alpha1 APIs
Deprecated pod_scheduling_duration_seconds metric is removed
Version Updates
Bump kubedns and nodelocaldns to 1.24.0
Bump kube-openapi
x/crypto/ssh dependency to v0.31.0
cri-tools to v1.32.0
Update publishing-bot rules to Go 1.22.9
hnslib to v0.0.8
Shoutouts
Big 1.32 Shoutout from Federico Muñoz: With Kubernetes v1.32 out, I want to acknowledge those that made it possible: my Release Lead shadows @Nina Polshakova @Sreeram Venkitesh @Mohammad Reza Saleh @Vyom Yadav, Enhancements Lead @tjons and shadows @Jenny Shu @Sepideh @Dipesh, Release Signal lead @Drew Hagen, and shadows @Amim Knabben @ChengHao Yang (tico88612) @Wendy Ha @sbaumer, Docs lead @dchan, and shadows @anshuman @Rod @James Spurin @Shedrack Akintayo @Michelle Nguyen, Release Notes lead @satyampsoni, and shadows @Augustin Tsang @jefftrojan @Lavish Pal @Melony Q. (aka.cloudmelon ) @rayandas @Sneha, Comms lead @Matteo, and shadows @Edith @Rashan @Ryota @Will-I-Am, Release Managers @jimangel and @Mickey and our EA @Kat Cosgrove (plus @Grace Nguyen from SIG Release). The success of this is much more the result of all your tireless work than anything else.
via Last Week in Kubernetes Development https://lwkd.info/
December 16, 2024 at 05:00PM
What Do I Hate About Kubernetes?
What are the things I dislike about Kubernetes? What should be improved?
#kubernetes #k8s
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=q9cv_Wa1cfw
Contact Chris Short
https://chrisshort.net/contact/
Contact Chris Short
via Chris Short https://chrisshort.net/
December 17, 2024 at 07:00PM
Kubernetes 1.32: Moving Volume Group Snapshots to Beta
https://kubernetes.io/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/
Volume group snapshots were introduced as an Alpha feature with the Kubernetes 1.27 release. The recent release of Kubernetes v1.32 moved that support to beta. The support for volume group snapshots relies on a set of extension APIs for group snapshots. These APIs allow users to take crash consistent snapshots for a set of volumes. Behind the scenes, Kubernetes uses a label selector to group multiple PersistentVolumeClaims for snapshotting. A key aim is to allow you to restore that set of snapshots to new volumes and recover your workload based on a crash consistent recovery point.
This new feature is only supported for CSI volume drivers.
An overview of volume group snapshots
Some storage systems provide the ability to create a crash consistent snapshot of multiple volumes. A group snapshot represents copies made from multiple volumes that are taken at the same point-in-time. A group snapshot can be used either to rehydrate new volumes (pre-populated with the snapshot data) or to restore existing volumes to a previous state (represented by the snapshots).
Why add volume group snapshots to Kubernetes?
The Kubernetes volume plugin system already provides a powerful abstraction that automates the provisioning, attaching, mounting, resizing, and snapshotting of block and file storage.
Underpinning all these features is the Kubernetes goal of workload portability: Kubernetes aims to create an abstraction layer between distributed applications and underlying clusters so that applications can be agnostic to the specifics of the cluster they run on and application deployment requires no cluster specific knowledge.
There was already a VolumeSnapshot API that provides the ability to take a snapshot of a persistent volume to protect against data loss or data corruption. However, there are other snapshotting functionalities not covered by the VolumeSnapshot API.
Some storage systems support consistent group snapshots that allow a snapshot to be taken from multiple volumes at the same point-in-time to achieve write order consistency. This can be useful for applications that contain multiple volumes. For example, an application may have data stored in one volume and logs stored in another volume. If snapshots for the data volume and the logs volume are taken at different times, the application will not be consistent and will not function properly if it is restored from those snapshots when a disaster strikes.
It is true that you can quiesce the application first, take an individual snapshot from each volume that is part of the application one after the other, and then unquiesce the application after all the individual snapshots are taken. This way, you would get application consistent snapshots.
However, sometimes the application quiesce can be so time consuming that you want to do it less frequently, or it may not be possible to quiesce an application at all. For example, a user may want to run weekly backups with application quiesce and nightly backups without application quiesce but with consistent group support which provides crash consistency across all volumes in the group.
Kubernetes APIs for volume group snapshots
Kubernetes' support for volume group snapshots relies on three API kinds that are used for managing snapshots:
VolumeGroupSnapshot
Created by a Kubernetes user (or perhaps by your own automation) to request creation of a volume group snapshot for multiple persistent volume claims. It contains information about the volume group snapshot operation such as the timestamp when the volume group snapshot was taken and whether it is ready to use. The creation and deletion of this object represents a desire to create or delete a cluster resource (a group snapshot).
VolumeGroupSnapshotContent
Created by the snapshot controller for a dynamically created VolumeGroupSnapshot. It contains information about the volume group snapshot including the volume group snapshot ID. This object represents a provisioned resource on the cluster (a group snapshot). The VolumeGroupSnapshotContent object binds to the VolumeGroupSnapshot for which it was created with a one-to-one mapping.
VolumeGroupSnapshotClass
Created by cluster administrators to describe how volume group snapshots should be created, including the driver information, the deletion policy, etc.
These three API kinds are defined as CustomResourceDefinitions (CRDs). These CRDs must be installed in a Kubernetes cluster for a CSI Driver to support volume group snapshots.
What components are needed to support volume group snapshots
Volume group snapshots are implemented in the external-snapshotter repository. Implementing volume group snapshots meant adding or changing several components:
New CustomResourceDefinitions for VolumeGroupSnapshot and two supporting APIs.
Volume group snapshot controller logic added to the common snapshot controller.
Logic to make CSI calls added to the snapshotter sidecar controller.
The volume snapshot controller and CRDs are deployed once per cluster, while the sidecar is bundled with each CSI driver.
Therefore, it makes sense to deploy the volume snapshot controller and CRDs as a cluster addon.
The Kubernetes project recommends that Kubernetes distributors bundle and deploy the volume snapshot controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver).
What's new in Beta?
The VolumeGroupSnapshot feature in CSI spec moved to GA in the v1.11.0 release.
The snapshot validation webhook was deprecated in external-snapshotter v8.0.0 and it is now removed. Most of the validation webhook logic was added as validation rules into the CRDs. Minimum required Kubernetes version is 1.25 for these validation rules. One thing in the validation webhook not moved to CRDs is the prevention of creating multiple default volume snapshot classes and multiple default volume group snapshot classes for the same CSI driver. With the removal of the validation webhook, an error will still be raised when dynamically provisioning a VolumeSnapshot or VolumeGroupSnapshot when multiple default volume snapshot classes or multiple default volume group snapshot classes for the same CSI driver exist.
The enable-volumegroup-snapshot flag in the snapshot-controller and the CSI snapshotter sidecar has been replaced by a feature gate. Since VolumeGroupSnapshot is a new API, the feature moves to Beta but the feature gate is disabled by default. To use this feature, enable the feature gate by adding the flag --feature-gates=CSIVolumeGroupSnapshot=true when starting the snapshot-controller and the CSI snapshotter sidecar.
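Concretely, that means adding the flag to the container args of both the snapshot-controller Deployment and the CSI snapshotter sidecar; a sketch of a snapshot-controller excerpt (the image tag and the other args are illustrative):

containers:
- name: snapshot-controller
  image: registry.k8s.io/sig-storage/snapshot-controller:v8.2.0   # illustrative tag
  args:
  - --leader-election=true
  - --feature-gates=CSIVolumeGroupSnapshot=true   # enables the beta volume group snapshot support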
The logic to dynamically create the VolumeGroupSnapshot and its corresponding individual VolumeSnapshot and VolumeSnapshotContent objects is moved from the CSI snapshotter to the common snapshot-controller. New RBAC rules are added to the common snapshot-controller and some RBAC rules are removed from the CSI snapshotter sidecar accordingly.
How do I use Kubernetes volume group snapshots
Creating a new group snapshot with Kubernetes
Once a VolumeGroupSnapshotClass object is defined and you have volumes you want to snapshot together, you may request a new group snapshot by creating a VolumeGroupSnapshot object.
The source of the group snapshot specifies whether the underlying group snapshot should be dynamically created or if a pre-existing VolumeGroupSnapshotContent should be used.
A pre-existing VolumeGroupSnapshotContent is created by a cluster administrator. It contains the details of the real volume group snapshot on the storage system which is available for use by cluster users.
One of the following members in the source of the group snapshot must be set.
selector - a label query over PersistentVolumeClaims that are to be grouped together for snapshotting. This selector will be used to match the label added to a PVC.
volumeGroupSnapshotContentName - specifies the name of a pre-existing VolumeGroupSnapshotContent object representing an existing volume group snapshot (a sketch of this option follows below).
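For the pre-provisioned case, the request is a sketch like the following (the names are placeholders; a matching VolumeGroupSnapshotContent must already have been created by an administrator):

apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshot
metadata:
  name: restored-group-snapshot                                    # placeholder name
  namespace: demo-namespace
spec:
  source:
    volumeGroupSnapshotContentName: pre-provisioned-group-content  # placeholder, points at existing content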
Dynamically provision a group snapshot
In the following example, there are two PVCs.
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
pvc-0   Bound    pvc-6e1f7d34-a5c5-4548-b104-01e72c72b9f2   100Mi      RWO            csi-hostpath-sc   <unset>                 2m15s
pvc-1   Bound    pvc-abc640b3-2cc1-4c56-ad0c-4f0f0e636efa   100Mi      RWO            csi-hostpath-sc   <unset>                 2m7s
Label the PVCs.
% kubectl label pvc pvc-0 group=myGroup
persistentvolumeclaim/pvc-0 labeled
% kubectl label pvc pvc-1 group=myGroup
persistentvolumeclaim/pvc-1 labeled
For dynamic provisioning, a selector must be set so that the snapshot controller can find PVCs with the matching labels to be snapshotted together.
apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshot
metadata:
  name: snapshot-daily-20241217
  namespace: demo-namespace
spec:
  volumeGroupSnapshotClassName: csi-groupSnapclass
  source:
    selector:
      matchLabels:
        group: myGroup
In the VolumeGroupSnapshot spec, a user can specify the VolumeGroupSnapshotClass which has the information about which CSI driver should be used for creating the group snapshot. A VolumeGroupSnapshotClass is required for dynamic provisioning.
apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshotClass
metadata:
  name: csi-groupSnapclass
  annotations:
    kubernetes.io/description: "Example group snapshot class"
driver: example.csi.k8s.io
deletionPolicy: Delete
As a result of the volume group snapshot creation, a corresponding VolumeGroupSnapshotContent object will be created with a volumeGroupSnapshotHandle pointing to a resource on the storage system.
Two individual volume snapshots will be created as part of the volume group snapshot creation.
NAME                                       READYTOUSE   SOURCEPVC   RESTORESIZE   SNAPSHOTCONTENT   AGE
snapshot-0962a745b2bf930bb385b7b50c9b08a
Are databases in Kubernetes production-ready?
Should we run databases in Kubernetes? Are they production-ready?
#kubernetes #database
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=9GJTZqkRRGM
Enhancing Kubernetes API Server Efficiency with API Streaming
https://kubernetes.io/blog/2024/12/17/kube-apiserver-api-streaming/
Managing Kubernetes clusters efficiently is critical, especially as they grow in size. A significant challenge with large clusters is the memory overhead caused by list requests.
In the existing implementation, the kube-apiserver processes list requests by assembling the entire response in-memory before transmitting any data to the client. But what if the response body is substantial, say hundreds of megabytes? Additionally, imagine a scenario where multiple list requests flood in simultaneously, perhaps after a brief network outage. While API Priority and Fairness has proven to reasonably protect kube-apiserver from CPU overload, its impact is visibly smaller for memory protection. This can be explained by the differing nature of resource consumption by a single API request - the CPU usage at any given time is capped by a constant, whereas memory, being uncompressible, can grow proportionally with the number of processed objects and is unbounded. This situation poses a genuine risk, potentially overwhelming and crashing any kube-apiserver within seconds due to out-of-memory (OOM) conditions. To better visualize the issue, let's consider the below graph.
The graph shows the memory usage of a kube-apiserver during a synthetic test (see the synthetic test section for more details). The results clearly show that increasing the number of informers significantly boosts the server's memory consumption. Notably, at approximately 16:40, the server crashed when serving only 16 informers.
Why does kube-apiserver allocate so much memory for list requests?
Our investigation revealed that this substantial memory allocation occurs because, before sending the first byte to the client, the server must:
fetch data from the database,
deserialize the data from its stored format,
and finally construct the final response by converting and serializing the data into the client-requested format.
This sequence results in significant temporary memory consumption. The actual usage depends on many factors like the page size, applied filters (e.g. label selectors), query parameters, and sizes of individual objects.
Unfortunately, neither API Priority and Fairness, nor Golang's garbage collection, nor Golang memory limits can prevent the system from exhausting memory under these conditions. The memory is allocated suddenly and rapidly, and just a few requests can quickly deplete the available memory, leading to resource exhaustion.
Depending on how the API server is run on the node, it might either be killed through OOM by the kernel when exceeding the configured memory limits during these uncontrolled spikes, or, if limits are not configured, it might have an even worse impact on the control plane node. And worse, after the first API server failure, the same requests will likely hit another control plane node in an HA setup, probably with the same impact. This is potentially a situation that is hard to diagnose and hard to recover from.
Streaming list requests
Today, we're excited to announce a major improvement. With the graduation of the watch list feature to beta in Kubernetes 1.32, client-go users can opt in (after explicitly enabling the WatchListClient feature gate) to streaming lists by switching from list to (a special kind of) watch requests.
Watch requests are served from the watch cache, an in-memory cache designed to improve scalability of read operations. By streaming each item individually instead of returning the entire collection, the new method maintains constant memory overhead. The API server is bound by the maximum allowed size of an object in etcd plus a few additional allocations. This approach drastically reduces the temporary memory usage compared to traditional list requests, ensuring a more efficient and stable system, especially in clusters with a large number of objects of a given type or large average object sizes, where memory consumption used to be high despite paging.
Building on the insight gained from the synthetic test (see the synthetic test section below), we developed an automated performance test to systematically evaluate the impact of the watch list feature. This test replicates the same scenario, generating a large number of Secrets with a large payload, and scaling the number of informers to simulate heavy list request patterns. The automated test is executed periodically to monitor memory usage of the server with the feature enabled and disabled.
The results showed significant improvements with the watch list feature enabled. With the feature turned on, the kube-apiserver’s memory consumption stabilized at approximately 2 GB. By contrast, with the feature disabled, memory usage increased to approximately 20GB, a 10x increase! These results confirm the effectiveness of the new streaming API, which reduces the temporary memory footprint.
Enabling API Streaming for your component
Upgrade to Kubernetes 1.32. Make sure your cluster uses etcd in version 3.4.31+ or 3.5.13+. Change your client software to use watch lists. If your client code is written in Golang, you'll want to enable WatchListClient for client-go. For details on enabling that feature, read Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control.
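For a third-party Golang component built on client-go, one way to opt in is through the environment-variable override described in the linked feature gates post; a sketch of a Deployment excerpt (the controller name and image are placeholders, and the env-var mechanism is an assumption based on that post):

containers:
- name: my-controller                        # placeholder name
  image: example.com/my-controller:v0.1.0    # placeholder image
  env:
  - name: KUBE_FEATURE_WatchListClient       # client-go feature gate override
    value: "true"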
What's next?
In Kubernetes 1.32, the feature is enabled in kube-controller-manager by default despite its beta state. This will eventually be expanded to other core components like kube-scheduler or kubelet once the feature becomes generally available, if not earlier. Other third-party components are encouraged to opt in to the feature during the beta phase, especially when they are at risk of accessing a large number of resources or kinds with potentially large object sizes.
For the time being, API Priority and Fairness assigns a reasonably small cost to list requests. This is necessary to allow enough parallelism for the average case where list requests are cheap enough. But it does not match the spiky, exceptional situation of many and large objects. Once the majority of the Kubernetes ecosystem has switched to watch list, the list cost estimation can be changed to larger values without risking degraded performance in the average case, and with that increase the protection against this kind of request, which can still hit the API server in the future.
The synthetic test
In order to reproduce the issue, we conducted a manual test to understand the impact of list requests on kube-apiserver memory usage. In the test, we created 400 Secrets, each containing 1 MB of data, and used informers to retrieve all Secrets.
The results were alarming: only 16 informers were needed to cause the test server to run out of memory and crash, demonstrating how quickly memory consumption can grow under such conditions.
Special shout out to @deads2k for his help in shaping this feature.
via Kubernetes Blog https://kubernetes.io/
December 16, 2024 at 07:00PM