
devopsish
A Peek at Kubernetes v1.30
https://kubernetes.io/blog/2024/03/12/kubernetes-1-30-upcoming-changes/
Authors: Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
A quick look: exciting changes in Kubernetes v1.30
It's a new year and a new Kubernetes release. We're halfway through the release cycle and have quite a few interesting and exciting enhancements coming in v1.30. From brand new features in alpha, to established features graduating to stable, to long-awaited improvements, this release has something for everyone to pay attention to!
To tide you over until the official release, here's a sneak peek of the enhancements we're most excited about in this cycle!
Major changes for Kubernetes v1.30
Structured parameters for dynamic resource allocation (KEP-4381)
Dynamic resource allocation was added to Kubernetes as an alpha feature in v1.26. It defines an alternative to the traditional device-plugin API for requesting access to third-party resources. By design, dynamic resource allocation uses parameters for resources that are completely opaque to core Kubernetes. This approach poses a problem for the Cluster Autoscaler (CA) or any higher-level controller that needs to make decisions for a group of pods (e.g. a job scheduler). It cannot simulate the effect of allocating or deallocating claims over time. Only the third-party DRA drivers have the information available to do this.
Structured Parameters for dynamic resource allocation is an extension to the original implementation that addresses this problem by building a framework to support making these claim parameters less opaque. Instead of handling the semantics of all claim parameters themselves, drivers could manage resources and describe them using a specific "structured model" pre-defined by Kubernetes. This would allow components aware of this "structured model" to make decisions about these resources without outsourcing them to some third-party controller. For example, the scheduler could allocate claims rapidly without back-and-forth communication with dynamic resource allocation drivers. Work done for this release centers on defining the framework necessary to enable different "structured models" and to implement the "named resources" model. This model allows listing individual resource instances and, compared to the traditional device plugin API, adds the ability to select those instances individually via attributes.
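To make the idea concrete, here is a rough sketch of what a driver-published listing under the "named resources" model might look like. This is illustrative only: the driver name, instance names, and attribute fields are placeholders, and the exact alpha schema lives in the v1.30 resource.k8s.io API, which may differ from this sketch.

```yaml
# Hypothetical sketch of the "named resources" structured model.
# Field names are approximate; consult the v1.30 resource.k8s.io
# alpha API reference for the real schema.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceSlice
nodeName: worker-1             # placeholder node
driverName: gpu.example.com    # placeholder DRA driver
namedResources:
  instances:
    - name: gpu-0              # an individually selectable device instance
      attributes:
        - name: memory
          quantity: 16Gi
```

Because the instances and their attributes are visible to Kubernetes itself, the scheduler (or Cluster Autoscaler) can match claims to instances directly instead of round-tripping through the driver.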
Node memory swap support (KEP-2400)
In Kubernetes v1.30, memory swap support on Linux nodes gets a big change to how it works - with a strong emphasis on improving system stability. In previous Kubernetes versions, the NodeSwap feature gate was disabled by default, and when enabled, it used UnlimitedSwap behavior as the default behavior. To achieve better stability, UnlimitedSwap behavior (which might compromise node stability) will be removed in v1.30.
The updated, still-beta support for swap on Linux nodes will be available by default. However, the default behavior will be to run the node set to NoSwap (not UnlimitedSwap) mode. In NoSwap mode, the kubelet supports running on a node where swap space is active, but Pods don't use any of the page file. You'll still need to set --fail-swap-on=false for the kubelet to run on that node. However, the big change is the other mode: LimitedSwap. In this mode, the kubelet actually uses the page file on that node and allows Pods to have some of their virtual memory paged out. Containers (and their parent pods) do not have access to swap beyond their memory limit, but the system can still use the swap space if available.
Kubernetes' Node special interest group (SIG Node) will also update the documentation to help you understand how to use the revised implementation, based on feedback from end users, contributors, and the wider Kubernetes community.
Read the previous blog post or the node swap documentation for more details on Linux node swap support in Kubernetes.
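Under the revised behavior, opting into LimitedSwap is a kubelet configuration choice. A minimal KubeletConfiguration sketch, assuming a Linux node with swap space active (check the node swap documentation for your version's exact fields):

```yaml
# Kubelet configuration sketch for LimitedSwap (Linux nodes only).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false          # let the kubelet start on a node with swap active
memorySwap:
  swapBehavior: LimitedSwap  # the v1.30 default is NoSwap
```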
Support user namespaces in pods (KEP-127)
User namespaces is a Linux-only feature that better isolates pods to prevent or mitigate several CVEs rated high/critical, including CVE-2024-21626, published in January 2024. In Kubernetes 1.30, support for user namespaces is migrating to beta and now supports pods with and without volumes, custom UID/GID ranges, and more!
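Opting a pod into its own user namespace is done with the pod-level hostUsers field. A minimal sketch (the pod name and image are placeholders; the feature also requires the UserNamespacesSupport feature gate and a compatible runtime):

```yaml
# Pod running in a separate user namespace (hostUsers: false).
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # false = give this pod its own user namespace
  containers:
    - name: app
      image: registry.k8s.io/e2e-test-images/agnhost:2.45
      command: ["sleep", "infinity"]
```

Inside the container, processes can appear to run as root while mapping to an unprivileged UID on the host, which is what blunts container-breakout CVEs like CVE-2024-21626.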
Structured authorization configuration (KEP-3221)
Support for structured authorization configuration is moving to beta and will be enabled by default. This feature enables the creation of authorization chains with multiple webhooks with well-defined parameters that validate requests in a particular order and allows fine-grained control – such as explicit Deny on failures. The configuration file approach even allows you to specify CEL rules to pre-filter requests before they are dispatched to webhooks, helping you to prevent unnecessary invocations. The API server also automatically reloads the authorizer chain when the configuration file is modified.
You must specify the path to that authorization configuration using the --authorization-config command line argument. If you want to keep using command line flags instead of a configuration file, those will continue to work as-is. To gain access to new authorization webhook capabilities like multiple webhooks, failure policy, and pre-filter rules, switch to putting options in an --authorization-config file. From Kubernetes 1.30, the configuration file format is beta-level, and only requires specifying --authorization-config since the feature gate is enabled by default. An example configuration with all possible values is provided in the Authorization docs.
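For illustration, a minimal configuration file might look like the following. This is a sketch, not a complete reference: the webhook name, kubeconfig path, and CEL expression are placeholders, and the authoritative schema is in the Authorization docs.

```yaml
# Sketch of a structured authorization configuration (beta in v1.30).
# Names, the path, and the CEL expression are illustrative placeholders.
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: policy-webhook
    webhook:
      timeout: 3s
      authorizedTTL: 300s
      unauthorizedTTL: 30s
      failurePolicy: Deny            # explicit Deny when the webhook fails
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig
      matchConditions:
        # CEL pre-filter: skip the webhook for cluster administrators
        - expression: "!('system:masters' in request.groups)"
  - type: RBAC
    name: rbac
```

Authorizers are consulted in the order listed, so the webhook above runs before RBAC, and the matchConditions pre-filter spares it from requests it would never need to evaluate.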
Container resource based pod autoscaling (KEP-1610)
Horizontal pod autoscaling based on ContainerResource metrics will graduate to stable in v1.30. This new behavior for HorizontalPodAutoscaler allows you to configure automatic scaling based on the resource usage for individual containers, rather than the aggregate resource use over a Pod. See our previous article for further details, or read container resource metrics.
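For example, to scale on the CPU utilization of a single container rather than the whole pod, a HorizontalPodAutoscaler can use a ContainerResource metric. A sketch, with placeholder Deployment and container names:

```yaml
# HPA scaling on one container's CPU via the ContainerResource metric source.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app            # placeholder workload name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        container: app   # scale on this container only, ignoring sidecars
        target:
          type: Utilization
          averageUtilization: 60
```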
CEL for admission control (KEP-3488)
Integrating Common Expression Language (CEL) for admission control in Kubernetes introduces a more dynamic and expressive way of evaluating admission requests. This feature allows complex, fine-grained policies to be defined and enforced directly through the Kubernetes API, enhancing security and governance capabilities without compromising performance or flexibility.
CEL's addition to Kubernetes admission control empowers cluster administrators to craft intricate rules that can evaluate the content of API requests against the desired state and policies of the cluster without resorting to Webhook-based access controllers. This level of control is crucial for maintaining the integrity, security, and efficiency of cluster operations, making Kubernetes environments more robust and adaptable to various use cases and requirements. For more information on using CEL for admission control, see the API documentation for ValidatingAdmissionPolicy.
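As a small sketch of what such a policy looks like (the resource scope and the expression are illustrative, not a recommended production rule):

```yaml
# ValidatingAdmissionPolicy sketch: reject Deployments with too many replicas.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"
      message: "replica count must not exceed 5"
```

Note that a policy only takes effect once it is bound to resources with a ValidatingAdmissionPolicyBinding; the CEL expression itself is evaluated in-process by the API server, with no webhook round trip.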
We hope you're as excited for this release as we are. Keep an eye out for the official release blog in a few weeks for more highlights!
via Kubernetes Blog https://kubernetes.io/
March 11, 2024 at 08:00PM
Developer Platform Consoles Should Be Dumb
We dive deep into the world of developer platform consoles, exploring the importance of simplicity, API-based architectures, and ...
via YouTube https://www.youtube.com/watch?v=Qy2QmJkwkP0
The TLA+ Home Page
Leslie Lamport
via Pocket http://lamport.azurewebsites.net/tla/tla.html
March 11, 2024 at 07:52AM
Week Ending March 3, 2024
https://lwkd.info/2024/20240307
Developer News
All CI jobs must be on K8s community infra as of yesterday. The infra team will migrate the simple ones themselves, but jobs you don’t help them move may be deleted. Update your jobs now.
Monitoring dashboards for the GKE and EKS build clusters are live. Also, there was an outage in EKS jobs last week.
After a year of work led by Tim Hockin, Go Workspaces support for hacking on Kubernetes is now available, eliminating a lot of GOPATH pain.
It’s time to start working on your SIG Annual Reports, which you should find a lot shorter and easier than in previous years. Note that you don’t have to be a SIG Chair to write these; the chairs just have to review them.
Release Schedule
Next Deadline: Test Freeze, March 27th
Code Freeze is now in effect. If your KEP did not get tracked and you want to get your KEP shipped in the 1.30 release, please file an exception as soon as possible.
March Cherry Pick deadline for patch releases is the 8th.
Featured PRs
122717: KEP-4358: Custom Resource Field Selectors
Selectors in Kubernetes have long been a way to limit large API calls like List and Watch, requesting things with only specific labels or similar. In operators this can be very important to reduce memory usage of shared informer caches, as well as generally keeping apiserver load down. Some core objects extended selectors beyond labels, allowing filtering on other fields such as listing Pods based on spec.nodeName. But this set of fields was limited and could feel random if you didn’t know the specific history of the API (e.g. Pods need a node name filter because it’s the main request made by the kubelet). And it wasn’t available at all to custom types. This PR expands the system, allowing each custom type to declare selector-able fields which will be checked and indexed automatically. The declaration uses JSONPath in a very similar way to the additionalPrinterColumns feature:
selectableFields:
- jsonPath: .spec.color
- jsonPath: .spec.size
These can then be used in the API just like any other field selector:
c.List(context.Background(), &redThings, client.MatchingFields{ "spec.color": "red", })
As an alpha feature, this is behind the CustomResourceFieldSelectors feature gate.
KEP of the Week
KEP-1610: Container Resource based Autoscaling
For scaling pods based on resource usage, the HPA currently calculates the sum of all the individual containers’ resource usage. This is not suitable for workloads where the containers are not related to each other. This KEP proposes that the HPA also provide an option to scale pods based on the resource usage of individual containers in a Pod. It adds a new ContainerResourceMetricSource metric source with a Container field, which identifies the container whose resources should be tracked. When there are multiple containers in a Pod, their individual resource usages can change at different rates, so a way to specify the target container gives finer-grained control over scaling.
This KEP has been in beta since v1.27 and is planned to graduate to stable in v1.30.
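A toy calculation (plain Python with made-up numbers, not Kubernetes source) shows why the pod-level aggregate can mislead when containers behave differently:

```python
# Toy numbers: an app container near its request, plus a mostly idle sidecar.
usage = {"app": 900, "sidecar": 100}       # millicores currently in use
requests = {"app": 1000, "sidecar": 1000}  # millicores requested

# Pod-level utilization, as the aggregate Resource metric computes it:
pod_util = 100 * sum(usage.values()) / sum(requests.values())

# Per-container utilization of the app, as ContainerResource allows targeting:
app_util = 100 * usage["app"] / requests["app"]

print(pod_util, app_util)  # the aggregate (50%) hides the hot container (90%)
```

An HPA targeting 60% utilization would not scale up on the 50% aggregate, even though the app container is at 90% and about to saturate.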
Other Merges
Tunnel kubectl port-forwarding through websockets
Enhanced conflict detection for Service Account and JWT
Create token duration can be zero
Reject empty usernames in OIDC authentication
OpenAPI V2 won’t publish non-matching group-version
New metrics: authorization webhook match conditions, jwt auth latency, watch cache latency
Kubeadm: list nodes needing upgrades, don’t pass duplicate default flags, better upgrade plans, WaitForAllControlPlaneComponents, upgradeConfiguration timeouts, upgradeConfiguration API
Implement strict JWT compact serialization enforcement
Don’t leak discovery documents via the Spec.Service field
Let the container runtime garbage-collect images by tagging them
Client-Go can upgrade subresource fields, and handles cache deletions
Wait for the ProviderID to be available before initializing a node
Don’t panic if nodecondition is nil
Broadcaster logging is now logging level 3
Access mode label for SELinux mounts
AuthorizationConfiguration v1alpha1 is also v1beta1
Kubelet user mapping IDs are configurable
Filter group versions in aggregated API requests
Match condition e2e tests are conformance
Kubelet gets constants from cadvisor
Promotions
PodSchedulingReadiness to GA
ImageMaximumGCAge to Beta
StructuredAuthorizationConfiguration to beta
MinDomainsInPodTopologySpread to beta
RemoteCommand Over Websockets to beta
ContainerCheckpoint to beta
ServiceAccountToken Info to beta
AggregatedDiscovery v2 to GA
PodHostIPs to GA
Version Updates
cadvisor to v0.49.0
kubedns to 1.23.0
Subprojects and Dependency Updates
kubespray to v2.24.1 Set Kubernetes v1.28.6 as the default Kubernetes version.
prometheus to v2.50.1 Fix for broken /metadata API endpoint
via Last Week in Kubernetes Development https://lwkd.info/
March 07, 2024 at 05:00PM
Apple blew $10 billion on failed car project, considered buying Tesla
Apple spent roughly $1 billion a year on its car project before canceling it last month, according to a report in Bloomberg.
March 08, 2024 at 02:25PM
Lightning Round at Security Slam 2023
December 15, 2023, marked a significant day in the world of Kubernetes, as the community came together for a special Lightning Round of the Security Slam.
via Pocket https://www.cncf.io/reports/lightning-round-at-security-slam-2023/
March 08, 2024 at 11:14AM
Lf
Linux Foundation - Sponsorship Program Model See main foundation page Kind of nonprofit: lf Sponsor URL: https://raw.githubusercontent.com/jmertic/lf-landscape/main/landscape.yml Levels URL: https://www.linuxfoundation.org/hubfs/lf_member_benefits_122723a.
via Pocket https://fossfoundation.info/sponsorships/lf
March 08, 2024 at 10:53AM
AWS Open Source (@AWSOpen) | Twitter
Hello World! Powering up #OSCON2017 Collaborate with us here on open source projects and releases. Build your #opensource career in #containers at #AWS!
via Pocket https://twitter.com/awsopen
March 07, 2024 at 03:34PM