
Suggested Reads
Tracing - Feat. Jaeger and Zipkin (You Choose!, Ch. 04, Ep. 04)
Tracing - Choose Your Own Adventure: The Observability Odyssey
In this episode, we'll go through traces. The contestants are Jaeger and Zipkin.
Vote for your choice of a tracing tool at https://cloud-native.slack.com/archives/C05M2NFNVRN. If you have not already joined CNCF Slack, you can do so from https://slack.cncf.io.
This and all other episodes are available at https://www.youtube.com/playlist?list=PLyicRj904Z9-FzCPvGpVHgRQVYJpVmx3Z.
More information about the "Choose Your Own Adventure" project including the source code and links to all the videos can be found at https://github.com/vfarcic/cncf-demo.
٩( ᐛ )و Whitney's YouTube Channel → https://www.youtube.com/@wiggitywhitney
#tracing #jaeger #zipkin
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ 🔗 Tracing: https://github.com/vfarcic/cncf-demo/tree/main/manuscript/tracing/README.md
via YouTube https://www.youtube.com/watch?v=mkL2hLwsxm4
Why Open Source Forking Is a Hot-Button Issue
Valkey, OpenTofu, and OpenBao are names of open source software project forks hosted by the Linux Foundation. The forks were instigated last year in response to…
September 27, 2024 at 02:41PM
via Instapaper
Week Ending September 22, 2024
https://lwkd.info/2024/20240925
Developer News
You have one day (or less) left to vote for Steering Committee members.
The call for presentations for the Maintainer Summit at KubeCon India is now open. The Maintainer Summit combines the Kubernetes Contributor Summit with contributor discussions and presentations by other CNCF projects.
Release Schedule
Next Deadline: Production Readiness, October 3
It’s the second week of 1.32 and hopefully you’re hard at work on your planned Enhancements.
KEP of the Week
KEP 2837: Pod level resource limits
Currently, resource allocation in PodSpec is done at the container level. The scheduler aggregates the resources requested by all the containers to find a suitable Node for the Pod. The Pod API lacks a way to specify limits at the Pod level, limiting the flexibility and ease of resource management for Pods as a whole. This KEP extends the Pod API with a resource spec at the Pod level. The new feature complements the existing container-level resource limits and makes resource management easier for tightly coupled applications. The KEP explains how resource limits are applied in the different cases where Pod-level and container-level requests and limits are specified, as well as how the OOM score calculation is done.
This KEP is tracked for alpha stage in the upcoming v1.32 release.
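For a concrete picture, here is a rough sketch of what a Pod using the proposed pod-level limits might look like. The field shape follows the KEP's alpha design and may change before GA; the image names are placeholders.

```yaml
# Sketch only: pod-level resources per KEP 2837 (alpha in v1.32);
# the exact field shape may change before GA.
apiVersion: v1
kind: Pod
metadata:
  name: tightly-coupled-app
spec:
  resources:              # NEW: limits shared by all containers in the Pod
    limits:
      cpu: "2"
      memory: 1Gi
  containers:
  - name: app
    image: registry.example.com/app:latest        # placeholder image
  - name: sidecar
    image: registry.example.com/sidecar:latest    # placeholder image
    # No per-container limits needed; the pod-level limit caps the sum.
```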
Other Merges
Fixed a legacy allocator range misinitialization, preventing IP conflicts.
Extended discovery GroupManager with Group lister interface
Explicit control of metrics collection in scheduler_perf tests, supporting multi-namespace
Ensure kubeadm join/reset handles etcd members only if their URLs/IDs are unique or exist
GPU tests now use Jobs, simplifying verification of successful completion with cupy instead of CUDA samples
Make sure to trigger Node/Delete event
Feature enhancement reinstating the Nvidia DaemonSet installation in the GCE test harness
Feature(scheduler): more fine-grained Node QHint for nodeunschedulable plugin and fixes
Optimized the Unstructured.GetManagedFields function by eliminating unnecessary deep copying of JSON values
Register missing Pod event for NodeUnschedulable plugin
Test improvements: nvidia GPU(s)
Fix setting resolvConf in drop-in kubelet config files
Make sure that the endpoints controller can reconcile the Endpoint object when it has more than 1000 addresses
Added integration tests for NodeUnschedulable, podtopologyspread & NodeResourcesFit in requeueing scenarios
Support added for API streaming
Improved the precision of Quantity.AsApproximateFloat64
Added a buffer of length 8 to the resourceupdates.Update channel to prevent blocking during device plugin data transmission to the kubelet
If the application/json;as=Table content type is requested, the WatchList will respond with a 406 (Not Acceptable) error
Improve the kubelet test coverage
Prevent the garbage collector controller from blocking indefinitely on a cache sync failure
Ensure that mismatched hostname labels and node names do not lead to incorrect pod scheduling or failures with nodeAffinity
Test case added for parsing a WSL 2 kernel version
Guarantee that restartable and non-restartable init containers are accounted for
Prevent Memory manager UnexpectedAdmissionError
spec.terminationGracePeriodSeconds should not be overwritten by MaxPodGracePeriodSeconds
Promotions
RetryGenerateName to GA
Deprecated
Removed the obsolete ClusterDns test and fixed its flaking
Remove node general update event from EventsToRegister when QHint is enabled
Version Updates
Update cadvisor to v0.50.0 and hcsshim to v0.12.6
Python Client v31.0.0
via Last Week in Kubernetes Development https://lwkd.info/
September 25, 2024 at 07:00PM
‘Not a business model’: How companies misunderstand open source
The dust-up between (formerly) open source database Redis and its fork, Valkey, highlights the fundamental difference between what businesses want and what open…
September 24, 2024 at 09:32AM
via Instapaper
Redis users considering alternatives after licensing move
Around 70 percent of Redis users are considering alternatives after the database company made a shift away from permissive open source licensing. According to a…
September 24, 2024 at 09:30AM
via Instapaper
Configuring requests & limits with the HPA at scale, with Alexandre Souza
https://kube.fm/hpa-at-scale-alex
Alexandre Souza, a senior platform engineer at Getir, shares his expertise in managing large-scale environments and configuring requests, limits, and autoscaling.
He explores the challenges of over-provisioning and under-provisioning and discusses strategies for optimizing resource allocation using tools like Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA).
You will learn:
How to set appropriate resource requests and limits to balance application performance and cost-efficiency in large-scale Kubernetes environments.
Strategies for implementing and configuring Horizontal Pod Autoscaler (HPA), including scaling policies and behavior management (see the sketch after this list).
The differences between CPU and memory management in Kubernetes and their impact on workload performance.
Techniques for leveraging tools like KubeCost and StormForge to automate resource optimization.
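To ground the scaling-policy point above, here is a minimal HorizontalPodAutoscaler sketch using the stable autoscaling/v2 API. The target name and thresholds are illustrative, not recommendations from the episode.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                        # illustrative target name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # scale when average CPU exceeds 70% of requests
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
      policies:
      - type: Percent
        value: 10                       # remove at most 10% of replicas per minute
        periodSeconds: 60
```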
Sponsor
This episode is sponsored by VictoriaMetrics - request a free trial for VictoriaMetrics enterprise today.
More info
Find all the links and info for this episode here: https://kube.fm/hpa-at-scale-alex
Interested in sponsoring an episode? Learn more.
via KubeFM https://kube.fm
September 24, 2024 at 06:00AM
Spotlight on SIG Scheduling
https://kubernetes.io/blog/2024/09/24/sig-scheduling-spotlight-2024/
In this SIG Scheduling spotlight we talked with Kensei Nakada, an approver in SIG Scheduling.
Introductions
Arvind: Hello, thank you for the opportunity to learn more about SIG Scheduling! Would you like to introduce yourself and tell us a bit about your role, and how you got involved with Kubernetes?
Kensei: Hi, thanks for the opportunity! I’m Kensei Nakada (@sanposhiho), a software engineer at Tetrate.io. I have been contributing to Kubernetes in my free time for more than 3 years, and now I’m an approver of SIG-Scheduling in Kubernetes. Also, I’m a founder/owner of two SIG subprojects, kube-scheduler-simulator and kube-scheduler-wasm-extension.
About SIG Scheduling
AP: That's awesome! You've been involved with the project for a long time. Can you provide a brief overview of SIG Scheduling and explain its role within the Kubernetes ecosystem?
KN: As the name implies, our responsibility is to enhance scheduling within Kubernetes. Specifically, we develop the components that determine which Node is the best place for each Pod. In Kubernetes, our main focus is on maintaining the kube-scheduler, along with other scheduling-related components as part of our SIG subprojects.
AP: I see, got it! That makes me curious: what recent innovations or developments has SIG Scheduling introduced to Kubernetes scheduling?
KN: From a feature perspective, there have been several enhancements to PodTopologySpread recently. PodTopologySpread is a relatively new feature in the scheduler, and we are still in the process of gathering feedback and making improvements.
Most recently, we have been focusing on a new internal enhancement called QueueingHint which aims to enhance scheduling throughput. Throughput is one of our crucial metrics in scheduling. Traditionally, we have primarily focused on optimizing the latency of each scheduling cycle. QueueingHint takes a different approach, optimizing when to retry scheduling, thereby reducing the likelihood of wasting scheduling cycles.
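For readers who have not used it, PodTopologySpread is configured per Pod. A minimal sketch of the constraint as it appears in a Pod spec (the label values are illustrative):

```yaml
# Spread replicas of app=web evenly across zones, allowing a skew of 1.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # or ScheduleAnyway for a soft constraint
    labelSelector:
      matchLabels:
        app: web
```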
AP: That sounds interesting! Are there any other interesting topics or projects you are currently working on within SIG Scheduling?
KN: I’m leading the development of QueueingHint, which I just shared. Given that it’s a big new challenge for us, we’ve been facing many unexpected challenges, especially around scalability, and we’re trying to solve each of them to eventually enable it by default.
And also, I believe kube-scheduler-wasm-extension (a SIG subproject) that I started last year would be interesting to many people. Kubernetes has various extension points across many components. Traditionally, scheduler extensions are provided via webhooks (the extender) or the Go SDK (the Scheduling Framework). However, these come with drawbacks: performance issues with webhooks, and the need to rebuild and replace the scheduler with the Go SDK, which poses difficulties for those seeking to extend the scheduler without deep familiarity with it. The project is trying to introduce a new solution to this general challenge: a WebAssembly-based extension. Wasm allows users to build plugins easily, without worrying about recompiling or replacing their scheduler, while sidestepping performance concerns.
Through this project, sig-scheduling has been learning valuable insights about WebAssembly's interaction with large Kubernetes objects. And I believe the experience that we’re gaining should be useful broadly within the community, beyond sig-scheduling.
AP: Definitely! Now, there are currently 8 subprojects inside SIG Scheduling. Would you like to talk about them? Are there some interesting contributions by those teams you want to highlight?
KN: Let me pick three subprojects: Kueue, KWOK, and the descheduler.
Kueue:
Recently, many people have been trying to manage batch workloads with Kubernetes, and in 2022 the Kubernetes community founded WG-Batch to better support such workloads. Kueue plays a crucial role there. It’s a job queueing controller that decides when a job should wait, when it should be admitted to start, and when it should be preempted. Kueue is designed to be installed on a vanilla Kubernetes cluster and to cooperate with existing mature controllers (the scheduler, cluster-autoscaler, kube-controller-manager, etc.).
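As a rough illustration of that flow, a batch Job is typically created suspended and pointed at a Kueue queue; Kueue admits it when quota is available. The queue name and image below are placeholders.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: training-job
  labels:
    kueue.x-k8s.io/queue-name: team-a-queue   # placeholder LocalQueue name
spec:
  suspend: true        # Kueue admits the Job by un-suspending it when quota allows
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: registry.example.com/train:latest   # placeholder image
        resources:
          requests:
            cpu: "4"
```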
KWOK:
KWOK is a toolkit with which you can create a cluster of thousands of Nodes in seconds. It’s mostly useful for simulation and testing as a lightweight cluster; in fact, another SIG subproject, kube-scheduler-simulator, uses KWOK in the background.
descheduler:
The descheduler is a component that evicts Pods running on undesired Nodes. In Kubernetes, scheduling constraints (PodAffinity, NodeAffinity, PodTopologySpread, etc.) are honored only at scheduling time, but there is no guarantee that the constraints remain satisfied afterwards. The descheduler evicts Pods that violate their scheduling constraints (or meet other undesired conditions) so that they’re recreated and rescheduled.
Descheduling Framework.
One very interesting ongoing project, similar to the Scheduling Framework in the scheduler, aims to make descheduling logic extensible and let maintainers focus on building the core engine of the descheduler.
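For a sense of how the descheduler is driven, here is a sketch of a policy enabling one of its plugins, following the shape of the project's v1alpha2 examples; the profile name is arbitrary and the details are illustrative.

```yaml
apiVersion: descheduler/v1alpha2
kind: DeschedulerPolicy
profiles:
- name: default                                   # arbitrary profile name
  pluginConfig:
  - name: RemovePodsViolatingInterPodAntiAffinity
  plugins:
    deschedule:
      enabled:
      - RemovePodsViolatingInterPodAntiAffinity   # evict Pods that now violate anti-affinity
```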
AP: Thank you for letting us know! And I have to ask, what are some of your favorite things about this SIG?
KN: What I really like about this SIG is how actively engaged everyone is. We come from various companies and industries, bringing diverse perspectives to the table. Instead of these differences causing division, they actually generate a wealth of opinions. Each view is respected, and this makes our discussions both rich and productive.
I really appreciate this collaborative atmosphere, and I believe it has been key to continuously improving our components over the years.
Contributing to SIG Scheduling
AP: Kubernetes is a community-driven project. Any recommendations for new contributors or beginners looking to get involved and contribute to SIG scheduling? Where should they start?
KN: Let me start with a general recommendation for contributing to any SIG: a common approach is to look for good-first-issue. However, you'll soon realize that many people worldwide are trying to contribute to the Kubernetes repository.
I suggest starting by examining the implementation of a component that interests you. If you have any questions about it, ask in the corresponding Slack channel (e.g., #sig-scheduling for the scheduler, #sig-node for kubelet, etc). Once you have a rough understanding of the implementation, look at issues within the SIG (e.g., sig-scheduling), where you'll find more unassigned issues compared to good-first-issue ones. You may also want to filter issues with the kind/cleanup label, which often indicates lower-priority tasks and can be starting points.
Specifically for SIG Scheduling, you should first understand the Scheduling Framework, which is the fundamental architecture of kube-scheduler. Most of the implementation is found in pkg/scheduler. I suggest starting with the ScheduleOne function and then exploring deeper from there.
Additionally, apart from the main kubernetes/kubernetes repository, consider looking into sub-projects. These typically have fewer maintainers and offer more opportunities to make a significant impact. Despite being called "sub" projects, many have a large number of users and a considerable impact on the community.
And last but not least, remember that contributing to the community isn’t just about code. While I talked a lot about implementation contributions, there are many ways to contribute, and each one is valuable. A comment on an issue, feedback on an existing feature, a review comment on a PR, a clarification in the documentation: every small contribution helps drive the Kubernetes ecosystem forward.
AP: Those are some pretty useful tips! And if I may ask, how do you assist new contributors in getting started, and what skills are contributors likely to learn by participating in SIG Scheduling?
KN: Our maintainers are available to answer your questions in the #sig-scheduling Slack channel. By participating, you'll gain a deeper understanding of Kubernetes scheduling and have the opportunity to collaborate and network with maintainers from diverse backgrounds. You'll learn not just how to write code, but also how to maintain a large project, design and discuss new features, address bugs, and much more.
Future Directions
AP: What are some Kubernetes-specific challenges in terms of scheduling? Are there any particular pain points?
KN: Scheduling in Kubernetes can be quite challenging because of the diverse needs of different organizations with different business requirements. Supporting all possible use cases in kube-scheduler is impossible. Therefore, extensibility is a key focus for us. A few years ago, we rearchitected kube-scheduler with Scheduling Framework, which offers flexible extensibility for users to implement various scheduling needs through plugins. This allows maintainers to focus on the core scheduling features and the framework runtime.
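As an illustration of that plugin model, scheduler plugins are wired up through a KubeSchedulerConfiguration. A sketch follows; the custom plugin name is hypothetical, and an out-of-tree plugin still has to be compiled into the scheduler binary.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      enabled:
      - name: MyCustomScorePlugin               # hypothetical plugin, registered at build time
        weight: 5
      disabled:
      - name: NodeResourcesBalancedAllocation   # swap out a default score plugin
```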
Another major issue is maintaining sufficient scheduling throughput. Typically, a Kubernetes cluster has only one kube-scheduler, so its throughput directly affects the overall scheduling scalability and, consequently, the cluster's scalability. Although we have an internal performance test (scheduler_perf), unfortunately, we sometimes overlook performance degradation in less common scenarios. It’s difficult as even small changes, which look irrelevant to performance, can lead to degradation.
AP: What are some upcoming goals or initiatives for SIG Scheduling? How do you envision the SIG evolving in the future?
KN: Our primary goal is always to build and maintain an extensible and stable scheduling runtime, and I bet this goal will remain unchanged forever.
As already mentioned, extensibility is a key focus for us.
Stop Writing Tedious Security Rules! Let Kubescape Do the Work
Discover how to enhance your Kubernetes security with Kubescape's Runtime Threat Detection! In this video, we dive into the Anomaly Detection Engine, which learns the normal behavior of your applications and alerts you to any deviations. We'll set up a demo environment, deploy Kubescape, and simulate both normal and malicious activities to see how it performs. Learn how to configure alerts, understand application profiles, and integrate with AlertManager for effective monitoring. Perfect for anyone looking to improve their Kubernetes security posture with open-source tools.
#KubernetesSecurity #Kubescape #RuntimeThreatDetection #AnomalyDetection
Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/security/stop-writing-tedious-security-rules-let-kubescape-do-the-work ➡ Kubescape: https://kubescape.io 🎬 How to Secure Kubernetes Clusters with Kubescape and Armo: https://youtu.be/ZATGiDIDBQk
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ Twitter: https://twitter.com/vfarcic ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Introduction to Runtime Security 02:55 Kubescape Runtime Threat Detection 04:14 Kubescape Anomaly Detection Engine In Action 14:41 Kubescape Anomaly Detection Engine Pros and Cons
via YouTube https://www.youtube.com/watch?v=xilNX_mh6vE