1_r/devopsish

Evolving our self-hosted offering and license model
Evolving our self-hosted offering and license model

Evolving our self-hosted offering and license model

What you need to know about the upcoming changes to CockroachDB Enterprise arriving this…

August 15, 2024 at 10:13AM

via Instapaper

·cockroachlabs.com·
Evolving our self-hosted offering and license model
Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta
Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta

Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta

https://kubernetes.io/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/

Volumes in Kubernetes have been described by two attributes: their storage class, and their capacity. The storage class is an immutable property of the volume, while the capacity can be changed dynamically with volume resize.

This complicates vertical scaling of workloads with volumes. While cloud providers and storage vendors often offer volumes which allow specifying IO quality of service (Performance) parameters like IOPS or throughput and tuning them as workloads operate, Kubernetes has no API which allows changing them.

We are pleased to announce that the VolumeAttributesClass KEP, alpha since Kubernetes 1.29, will be beta in 1.31. This provides a generic, Kubernetes-native API for modifying volume parameters like provisioned IO.

Like all new volume features in Kubernetes, this API is implemented via the container storage interface (CSI). In addition to the VolumeAttributesClass feature gate, your provisioner-specific CSI driver must support the new ModifyVolume API which is the CSI side of this feature.

See the full documentation for all details. Here we show the common workflow.

Dynamically modifying volume attributes.

A VolumeAttributesClass is a cluster-scoped resource that specifies provisioner-specific attributes. These are created by the cluster administrator in the same way as storage classes. For example, a series of gold, silver and bronze volume attribute classes can be created for volumes with greater or lesser amounts of provisioned IO.

apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: silver
driverName: your-csi-driver
parameters:
  provisioned-iops: "500"
  provisioned-throughput: "50MiB/s"
---
apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: gold
driverName: your-csi-driver
parameters:
  provisioned-iops: "10000"
  provisioned-throughput: "500MiB/s"

An attribute class is added to a PVC in much the same way as a storage class.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName: any-storage-class
  volumeAttributesClassName: silver
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi

Unlike a storage class, the volume attributes class can be changed:

kubectl patch pvc test-pv-claim -p '{"spec": {"volumeAttributesClassName": "gold"}}'

Kubernetes will work with the CSI driver to update the attributes of the volume. The status of the PVC tracks both the current and the desired attributes class. The PV resource is also updated with the volume attributes class that reflects the currently active attributes of the volume.
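To observe the transition, you can compare the class requested on the spec with what the status reports. This is a hedged sketch; the status field name currentVolumeAttributesClassName is assumed from the VolumeAttributesClass API, and the PVC name matches the example above:

kubectl get pvc test-pv-claim -o jsonpath='{.spec.volumeAttributesClassName}'
kubectl get pvc test-pv-claim -o jsonpath='{.status.currentVolumeAttributesClassName}'

Once the CSI driver finishes the ModifyVolume call, the two values converge on gold.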

Limitations with the beta

As a beta feature, some capabilities planned for GA are not yet present. The largest is quota support; see the KEP and the discussion in sig-storage for details.

See the Kubernetes CSI driver list for up-to-date information of support for this feature in CSI drivers.

via Kubernetes Blog https://kubernetes.io/

August 14, 2024 at 08:00PM

·kubernetes.io·
Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta
Open Model Initiative
Open Model Initiative
Open Model Initiative has 3 repositories available. Follow their code on GitHub.
·github.com·
Open Model Initiative
Announcing Karpenter 1.0 | Amazon Web Services
Announcing Karpenter 1.0 | Amazon Web Services
Introduction In November 2021, AWS announced the launch of v0.5 of Karpenter, “a new open source Kubernetes cluster auto scaling project.” Originally conceived as a flexible, dynamic, and high-performance alternative to the Kubernetes Cluster Autoscaler, in the nearly three years since then Karpenter has evolved substantially into a fully featured, Kubernetes native node lifecycle manager. […]
·aws.amazon.com·
Announcing Karpenter 1.0 | Amazon Web Services
SNMP alerted us when we were supposed to start shutting down servers when our ancient air handlers bit the dust at Pope AFB. I tracked variances to time of day. | Uncertainties and issues in using IPMI temperature data
SNMP alerted us when we were supposed to start shutting down servers when our ancient air handlers bit the dust at Pope AFB. I tracked variances to time of day. | Uncertainties and issues in using IPMI temperature data
Uncertainties and issues in using IPMI temperature data
·utcc.utoronto.ca·
SNMP alerted us when we were supposed to start shutting down servers when our ancient air handlers bit the dust at Pope AFB. I tracked variances to time of day. | Uncertainties and issues in using IPMI temperature data
Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache
Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache

Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache

https://kubernetes.io/blog/2024/08/15/consistent-read-from-cache-beta/

Kubernetes is renowned for its robust orchestration of containerized applications, but as clusters grow, the demands on the control plane can become a bottleneck. A key challenge has been ensuring strongly consistent reads from the etcd datastore, requiring resource-intensive quorum reads.

Today, the Kubernetes community is excited to announce a major improvement: consistent reads from cache, graduating to Beta in Kubernetes v1.31.

Why consistent reads matter

Consistent reads are essential for ensuring that Kubernetes components have an accurate view of the latest cluster state. Guaranteeing consistent reads is crucial for maintaining the accuracy and reliability of Kubernetes operations, enabling components to make informed decisions based on up-to-date information. In large-scale clusters, fetching and processing this data can be a performance bottleneck, especially for requests that involve filtering results. While Kubernetes can filter data by namespace directly within etcd, any other filtering by labels or field selectors requires the entire dataset to be fetched from etcd and then filtered in-memory by the Kubernetes API server. This is particularly impactful for components like the kubelet, which only needs to list pods scheduled to its node - but previously required the API Server and etcd to process all pods in the cluster.

The breakthrough: Caching with confidence

Kubernetes has long used a watch cache to optimize read operations. The watch cache stores a snapshot of the cluster state and receives updates through etcd watches. However, until now, it couldn't serve consistent reads directly, as there was no guarantee the cache was sufficiently up-to-date.

The consistent reads from cache feature addresses this by leveraging etcd's progress notifications mechanism. These notifications inform the watch cache about how current its data is compared to etcd. When a consistent read is requested, the system first checks if the watch cache is up-to-date. If the cache is not up-to-date, the system queries etcd for progress notifications until it's confirmed that the cache is sufficiently fresh. Once ready, the read is efficiently served directly from the cache, which can significantly improve performance, particularly in cases where it would require fetching a lot of data from etcd. This enables requests that filter data to be served from the cache, with only minimal metadata needing to be read from etcd.
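For a sense of the requests that benefit, consider the kubelet-style filtered list mentioned above, roughly equivalent to the following (the node name is hypothetical):

kubectl get pods --all-namespaces --field-selector spec.nodeName=node-1

Such a request is a consistent read (no resourceVersion set). Previously it forced the API server to pull every pod from etcd and filter in memory; with this feature and a supported etcd, the server confirms the watch cache is fresh via a progress notification and filters from the cache instead.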

Important Note: To benefit from this feature, your Kubernetes cluster must be running etcd version 3.4.31+ or 3.5.13+. For older etcd versions, Kubernetes will automatically fall back to serving consistent reads directly from etcd.

Performance gains you'll notice

This seemingly simple change has a profound impact on Kubernetes performance and scalability:

Reduced etcd Load: Kubernetes v1.31 can offload work from etcd, freeing up resources for other critical operations.

Lower Latency: Serving reads from cache is significantly faster than fetching and processing data from etcd. This translates to quicker responses for components, improving overall cluster responsiveness.

Improved Scalability: Large clusters with thousands of nodes and pods will see the most significant gains, as the reduction in etcd load allows the control plane to handle more requests without sacrificing performance.

5k Node Scalability Test Results: In recent scalability tests on 5,000 node clusters, enabling consistent reads from cache delivered impressive improvements:

30% reduction in kube-apiserver CPU usage

25% reduction in etcd CPU usage

Up to 3x reduction (from 5 seconds to 1.5 seconds) in 99th percentile pod LIST request latency

What's next?

With the graduation to beta, consistent reads from cache are enabled by default, offering a seamless performance boost to all Kubernetes users running a supported etcd version.

Our journey doesn't end here. The Kubernetes community is actively exploring pagination support in the watch cache, which will unlock even more performance optimizations in the future.

Getting started

Upgrading to Kubernetes v1.31 and ensuring you are using etcd version 3.4.31+ or 3.5.13+ is the easiest way to experience the benefits of consistent reads from cache. If you have any questions or feedback, don't hesitate to reach out to the Kubernetes community.

Let us know how consistent reads from cache transforms your Kubernetes experience!

Special thanks to @ah8ad3 and @p0lyn0mial for their contributions to this feature!

via Kubernetes Blog https://kubernetes.io/

August 14, 2024 at 08:00PM

·kubernetes.io·
Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache
Next they’ll charge a fee to buy things through WebKit browsers | Apple says Patreon must switch to its billing system or risk removal from App Store | TechCrunch
Next they’ll charge a fee to buy things through WebKit browsers | Apple says Patreon must switch to its billing system or risk removal from App Store | TechCrunch
Apple has threatened to remove creator platform Patreon from the App Store if creators use unsupported third-party billing options or disable transactions
·techcrunch.com·
Next they’ll charge a fee to buy things through WebKit browsers | Apple says Patreon must switch to its billing system or risk removal from App Store | TechCrunch
Palo Alto Networks apologizes as sexist marketing misfires
Palo Alto Networks apologizes as sexist marketing misfires

Palo Alto Networks apologizes as sexist marketing misfires

If you attended the Black Hat conference in Vegas last week and found yourself over in Palo Alto Networks' corner of the event, you may have encountered a…

August 14, 2024 at 01:44PM

via Instapaper

·theregister.com·
Palo Alto Networks apologizes as sexist marketing misfires
Last Week in Kubernetes Development - Week Ending August 11 2024
Last Week in Kubernetes Development - Week Ending August 11 2024

Week Ending August 11, 2024

https://lwkd.info/2024/20240814

Developer News

It’s Release Week! Kubernetes 1.31 “Elli” is released, with many new features. In addition to the list of features in the main blog post, note that cgroup v1 is going into maintenance mode, several things have been removed (most notably the in-tree Ceph driver), and lastTransitionTime has been added for PVs. More 1.31 features below.

Steering Committee nominations are open.

The Kubernetes Contributor Summit is looking for artists to create designs. Registration and the CfP are still open.

Release Schedule

Next Deadline: v1.31.0 release day, August 13th

Kubernetes 1.31 was released on August 13.

Patch releases are expected later this week.

Lesser-known 1.31 Features

These features didn’t make the 1.31 release blog, but are interesting to contributors:

4355: Coordinated Leader Elections

This Enhancement makes control plane leader elections function in a way that is compatible with upgrading one control plane component at a time, by keeping everyone on the old API server until everything else is upgraded. This should make for a smoother, and more reliable, upgrade experience. Alpha and opt-in only for 1.31.

4368: Job API managed-by mechanism

A small part of the MultiKueue initiative of the Kueue job manager, this enhancement adds tracking for which controller “owns” a job. While potentially useful for any multi-controller environment, the change is intended to make multi-cluster job scheduling possible. Alpha in 1.31.
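A hedged sketch of what that looks like on a Job manifest: the managedBy field is gated behind the alpha JobManagedBy feature gate, and the controller value shown is an assumption based on Kueue's MultiKueue usage.

apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
spec:
  managedBy: kueue.x-k8s.io/multikueue   # a controller other than the built-in Job controller "owns" this Job
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sleep", "5"]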

4176 and 4622: HPC Features

Two features make Kubernetes more useful on bigger, beefier machines. We can spread hyperthreads across physical CPUs, making better use of high-core-count machines. And we can configure topology rules for more than eight NUMA nodes, supporting very high memory systems. 4176 is Alpha and 4622 is Beta in 1.31.

KEP of the Week

KEP 4420: Retry Generate Name

This KEP implements automated retry of generateName create requests when a name conflict occurs. Despite generating over 14 million possible names per prefix with a 5-character random suffix, conflicts are frequent, with a 50% chance after 5,000 names. Currently, a conflict triggers an HTTP 409 response, leaving it to clients to retry, which many fail to do, causing production issues.

This feature became Beta in 1.31.
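For reference, this is the metadata field in question; a minimal sketch (the name prefix is hypothetical, and such objects must be created with kubectl create rather than apply):

apiVersion: v1
kind: ConfigMap
metadata:
  generateName: build-log-   # the API server appends a 5-character random suffix, e.g. build-log-x7k2p

Before this change, a suffix collision surfaced as an HTTP 409 that each client had to handle; with the feature enabled, the API server retries the name generation itself before giving up.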

Subprojects and Dependency Updates

containerd v1.6.35: regenerates the UUID if the state is empty in the introspection service

Prometheus v2.54.0: experimental Remote-Write 2.0, plus metadata in the WAL via the metadata-wal-records feature flag; also v2.53.2

via Last Week in Kubernetes Development https://lwkd.info/

August 14, 2024 at 05:00PM

·lwkd.info·
Last Week in Kubernetes Development - Week Ending August 11 2024
A good read for folks who are suddenly looking for a job or preparing to be | One Week Later: My Journey, Gratitude, and Tips for Standing Out in the Job Market
A good read for folks who are suddenly looking for a job or preparing to be | One Week Later: My Journey, Gratitude, and Tips for Standing Out in the Job Market
It's been a week since I made my initial post on August 2nd announcing my departure from AWS. If you haven't seen it yet, you can find it here: https://www.
·linkedin.com·
A good read for folks who are suddenly looking for a job or preparing to be | One Week Later: My Journey, Gratitude, and Tips for Standing Out in the Job Market
Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode
Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode

Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode

https://kubernetes.io/blog/2024/08/14/kubernetes-1-31-moving-cgroup-v1-support-maintenance-mode/

As Kubernetes continues to evolve and adapt to the changing landscape of container orchestration, the community has decided to move cgroup v1 support into maintenance mode in v1.31. This shift aligns with the broader industry's move towards cgroup v2, which offers improved functionality, including better scalability and a more consistent interface. Before we dive into the consequences for Kubernetes, let's take a step back to understand what cgroups are and their significance in Linux.

Understanding cgroups

Control groups, or cgroups, are a Linux kernel feature that allows the allocation, prioritization, denial, and management of system resources (such as CPU, memory, disk I/O, and network bandwidth) among processes. This functionality is crucial for maintaining system performance and ensuring that no single process can monopolize system resources, which is especially important in multi-tenant environments.

There are two versions of cgroups: v1 and v2. While cgroup v1 provided sufficient capabilities for resource management, it had limitations that led to the development of cgroup v2. Cgroup v2 offers a more unified and consistent interface, on top of better resource control features.
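To check which cgroup version a node is running, one common approach is to inspect the filesystem type mounted at /sys/fs/cgroup (run on the node itself):

stat -fc %T /sys/fs/cgroup/

This prints cgroup2fs on a cgroup v2 host and tmpfs on a cgroup v1 host.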

Cgroups in Kubernetes

For Linux nodes, Kubernetes relies heavily on cgroups to manage and isolate the resources consumed by containers running in pods. Each container in Kubernetes is placed in its own cgroup, which allows Kubernetes to enforce resource limits, monitor usage, and ensure fair resource distribution among all containers.

How Kubernetes uses cgroups

Resource Allocation

Ensures that containers do not exceed their allocated CPU and memory limits.

Isolation

Isolates containers from each other to prevent resource contention.

Monitoring

Tracks resource usage for each container to provide insights and metrics.
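As an illustration of the resource allocation and isolation described above, here is a minimal pod spec whose limits Kubernetes enforces through cgroups (names and values are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: limited-app
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"       # enforced through the container's CPU cgroup
        memory: "256Mi"   # exceeding this triggers the cgroup OOM killer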

Transitioning to Cgroup v2

The Linux community has been focusing on cgroup v2 for new features and improvements. Major Linux distributions and projects like systemd are transitioning towards cgroup v2. Using cgroup v2 provides several benefits over cgroup v1, such as a unified hierarchy, an improved interface, better resource control, a cgroup-aware OOM killer, rootless support, and more.

Given these advantages, Kubernetes is also making the move to embrace cgroup v2 more fully. However, this transition needs to be handled carefully to avoid disrupting existing workloads and to provide a smooth migration path for users.

Moving cgroup v1 support into maintenance mode

What does maintenance mode mean?

When cgroup v1 is placed into maintenance mode in Kubernetes, it means that:

Feature Freeze: No new features will be added to cgroup v1 support.

Security Fixes: Critical security fixes will still be provided.

Best-Effort Bug Fixes: Major bugs may be fixed if feasible, but some issues might remain unresolved.

Why move to maintenance mode?

The move to maintenance mode is driven by the need to stay in line with the broader ecosystem and to encourage the adoption of cgroup v2, which offers better performance, security, and usability. By transitioning cgroup v1 to maintenance mode, Kubernetes can focus on enhancing support for cgroup v2 and ensure it meets the needs of modern workloads. It's important to note that maintenance mode does not mean deprecation; cgroup v1 will continue to receive critical security fixes and major bug fixes as needed.

What this means for cluster administrators

Users currently relying on cgroup v1 are highly encouraged to plan for the transition to cgroup v2. This transition involves:

Upgrading Systems: Ensuring that the underlying operating systems and container runtimes support cgroup v2.

Testing Workloads: Verifying that workloads and applications function correctly with cgroup v2.

Further reading

Linux cgroups

Cgroup v2 in Kubernetes

Kubernetes 1.25: cgroup v2 graduates to GA

via Kubernetes Blog https://kubernetes.io/

August 13, 2024 at 08:00PM

·kubernetes.io·
Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode
Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA
Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA

Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA

https://kubernetes.io/blog/2024/08/14/last-phase-transition-time-ga/

Announcing the graduation to General Availability (GA) of the PersistentVolume lastTransitionTime status field, in Kubernetes v1.31!

The Kubernetes SIG Storage team is excited to announce that the "PersistentVolumeLastPhaseTransitionTime" feature, introduced as an alpha in Kubernetes v1.28, has now reached GA status and is officially part of the Kubernetes v1.31 release. This enhancement helps Kubernetes users understand when a PersistentVolume transitions between different phases, allowing for more efficient and informed resource management.

For a v1.31 cluster, you can now assume that every PersistentVolume object has a .status.lastTransitionTime field, that holds a timestamp of when the volume last transitioned its phase. This change is not immediate; the new field will be populated whenever a PersistentVolume is updated and first transitions between phases (Pending, Bound, or Released) after upgrading to Kubernetes v1.31.

What changed?

The API strategy for updating PersistentVolume objects has been modified to populate the .status.lastTransitionTime field with the current timestamp whenever a PersistentVolume transitions phases. Users are allowed to set this field manually if needed, but it will be overwritten when the PersistentVolume transitions phases again.
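To inspect the timestamp on an existing volume (the PV name is hypothetical):

kubectl get pv example-pv -o jsonpath='{.status.lastTransitionTime}'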

For more details, read about Phase transition timestamp in the Kubernetes documentation. You can also read the previous blog post announcing the feature as alpha in v1.28.

To provide feedback, join our Kubernetes Storage Special-Interest-Group (SIG) or participate in discussions on our public Slack channel.

via Kubernetes Blog https://kubernetes.io/

August 13, 2024 at 08:00PM

·kubernetes.io·
Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA
Inside the "3 Billion People" National Public Data Breach
Inside the "3 Billion People" National Public Data Breach
I decided to write this post because there's no concise way to explain the nuances of what's being described as one of the largest data breaches ever. Usually, it's easy to articulate a data breach; a service people provide their information to had someone snag it through an act of
·troyhunt.com·
Inside the "3 Billion People" National Public Data Breach
Kubernetes v1.31: Elli
Kubernetes v1.31: Elli

Kubernetes v1.31: Elli

https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/

Editors: Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith

Announcing the release of Kubernetes v1.31: Elli!

Similar to previous releases, the release of Kubernetes v1.31 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community. This release consists of 45 enhancements. Of those enhancements, 11 have graduated to Stable, 22 are entering Beta, and 12 have graduated to Alpha.

Release theme and logo

The Kubernetes v1.31 Release Theme is "Elli".

Kubernetes v1.31's Elli is a cute and joyful dog, with a heart of gold and a nice sailor's cap, as a playful wink to the huge and diverse family of Kubernetes contributors.

Kubernetes v1.31 marks the first release after the project has successfully celebrated its first 10 years. Kubernetes has come a very long way since its inception, and it's still moving towards exciting new directions with each release. After 10 years, it is awe-inspiring to reflect on the effort, dedication, skill, wit and tiring work of the countless Kubernetes contributors who have made this a reality.

And yet, despite the herculean effort needed to run the project, there is no shortage of people who show up, time and again, with enthusiasm, smiles and a sense of pride for contributing and being part of the community. This "spirit" that we see from new and old contributors alike is the sign of a vibrant community, a "joyful" community, if we might call it that.

Kubernetes v1.31's Elli is all about celebrating this wonderful spirit! Here's to the next decade of Kubernetes!

Highlights of features graduating to Stable

This is a selection of some of the improvements that are now stable following the v1.31 release.

AppArmor support is now stable

Kubernetes support for AppArmor is now GA. Protect your containers using AppArmor by setting the appArmorProfile.type field in the container's securityContext. Note that before Kubernetes v1.30, AppArmor was controlled via annotations; starting in v1.30 it is controlled using fields. It is recommended that you migrate away from the annotations and use the appArmorProfile.type field.
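A minimal sketch of the field-based configuration on a container (the profile type values follow the AppArmor documentation):

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      appArmorProfile:
        type: RuntimeDefault   # or Localhost, together with localhostProfile: <profile-name>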

To learn more read the AppArmor tutorial. This work was done as a part of KEP #24, by SIG Node.

Improved ingress connectivity reliability for kube-proxy

Improved ingress connectivity reliability for kube-proxy is stable in v1.31. A common problem with load balancers in Kubernetes is keeping the different components involved in sync so that traffic is not dropped. This feature adds a mechanism in kube-proxy that lets load balancers do connection draining for terminating Nodes exposed by Services of type: LoadBalancer with externalTrafficPolicy: Cluster, and establishes best practices for cloud providers and Kubernetes load balancer implementations.

To use this feature, kube-proxy needs to run as the default service proxy on the cluster, and the load balancer needs to support connection draining. No specific changes are required: it has been enabled by default in kube-proxy since v1.30 and is promoted to stable in v1.31.

For more details about this feature please visit the Virtual IPs and Service Proxies documentation page.

This work was done as part of KEP #3836 by SIG Network.

Persistent Volume last phase transition time

The Persistent Volume last phase transition time feature moved to GA in v1.31. This feature adds a PersistentVolumeStatus field which holds a timestamp of when a PersistentVolume last transitioned to a different phase. With this feature enabled, every PersistentVolume object will have a new field .status.lastTransitionTime, that holds a timestamp of when the volume last transitioned its phase. This change is not immediate; the new field will be populated whenever a PersistentVolume is updated and first transitions between phases (Pending, Bound, or Released) after upgrading to Kubernetes v1.31. This allows you to measure the time it takes for a PersistentVolume to move from Pending to Bound, which can also be useful for providing metrics and SLOs.

For more details about this feature please visit the PersistentVolume documentation page.

This work was done as a part of KEP #3762 by SIG Storage.

Highlights of features graduating to Beta

This is a selection of some of the improvements that are now beta following the v1.31 release.

nftables backend for kube-proxy

The nftables backend moves to beta in v1.31, behind the NFTablesProxyMode feature gate which is now enabled by default.

The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables. The nftables proxy mode is able to process changes to service endpoints faster and more efficiently than the iptables mode, and is also able to more efficiently process packets in the kernel (though this only becomes noticeable in clusters with tens of thousands of services).

As of Kubernetes v1.31, the nftables mode is still relatively new, and may not be compatible with all network plugins; consult the documentation for your network plugin. This proxy mode is only available on Linux nodes, and requires kernel 5.13 or later. Before migrating, note that some features, especially around NodePort services, are not implemented exactly the same in nftables mode as they are in iptables mode. Check the migration guide to see if you need to override the default configuration.
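Switching the proxy mode is a kube-proxy configuration change; a sketch using the kube-proxy component configuration (the feature gate line is shown only for explicitness, since the gate is already on by default in v1.31):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables
featureGates:
  NFTablesProxyMode: true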

This work was done as part of KEP #3866 by SIG Network.

Changes to reclaim policy for PersistentVolumes

The Always Honor PersistentVolume Reclaim Policy feature has advanced to beta in Kubernetes v1.31. This enhancement ensures that the PersistentVolume (PV) reclaim policy is respected even after the associated PersistentVolumeClaim (PVC) is deleted, thereby preventing the leakage of volumes.

Prior to this feature, the reclaim policy linked to a PV could be disregarded under specific conditions, depending on whether the PV or PVC was deleted first. Consequently, the corresponding storage resource in the external infrastructure might not be removed, even if the reclaim policy was set to "Delete". This led to potential inconsistencies and resource leaks.

With the introduction of this feature, Kubernetes now guarantees that the "Delete" reclaim policy will be enforced, ensuring the deletion of the underlying storage object from the backend infrastructure, regardless of the deletion sequence of the PV and PVC.
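The policy in question is the ordinary persistentVolumeReclaimPolicy field on the PV; a sketch (the driver name and volume handle are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete   # now honored regardless of whether the PV or PVC is deleted first
  csi:
    driver: example.csi.vendor.com
    volumeHandle: vol-0123456789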

This work was done as part of KEP #2644 by SIG Storage.

Bound service account token improvements

The ServiceAccountTokenNodeBinding feature is promoted to beta in v1.31. This feature allows requesting a token bound only to a node, not to a pod, which includes node information in claims in the token and validates the existence of the node when the token is used. For more information, read the bound service account tokens documentation.

This work was done as part of KEP #4193 by SIG Auth.

Multiple Service CIDRs

Support for clusters with multiple Service CIDRs moves to beta in v1.31 (disabled by default).

There are multiple components in a Kubernetes cluster that consume IP addresses: Nodes, Pods, and Services. Node and Pod IP ranges can be changed dynamically because they depend on the infrastructure or the network plugin, respectively. However, Service IP ranges are defined during cluster creation as a hardcoded flag on the kube-apiserver. IP exhaustion has been a problem for long-lived or large clusters, as admins needed to expand, shrink, or even entirely replace the assigned Service CIDR range. These operations were never supported natively and were performed via complex and delicate maintenance procedures, often causing downtime. This new feature allows users and cluster admins to dynamically modify Service CIDR ranges with zero downtime.
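A hedged sketch of adding an extra range with the new API (the API group and version are assumed to match the beta; the CIDR value is hypothetical):

apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
  - 10.96.128.0/20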

For more details about this feature please visit the Virtual IPs and Service Proxies documentation page.

This work was done as part of KEP #1880 by SIG Network.

Traffic distribution for Services

Traffic distribution for Services moves to beta in v1.31 and is enabled by default.

After several iterations on finding the best user experience and traffic engineering capabilities for Services networking, SIG Networking implemented the trafficDistribution field in the Service specification, which serves as a guideline for the underlying implementation to consider while making routing decisions.
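A sketch of the field on a Service (the selector and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose   # hint to prefer endpoints topologically close to the client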

For more details about this feature please read the 1.30 Release Blog or visit the Service documentation page.

This work was done as part of KEP #4444 by SIG Network.

Kubernetes VolumeAttributesClass ModifyVolume

The VolumeAttributesClass API is moving to beta in v1.31. VolumeAttributesClass provides a generic, Kubernetes-native API for dynamically modifying volume parameters like provisioned IO. This allows workloads to vertically scale their volumes online to balance cost and performance, if supported by their provider. This feature has been alpha since Kubernetes 1.29.

This work was done as part of KEP #3751, led by SIG Storage.

New features in Alpha

This is a selection of some of the improvements that are now alpha following the v1.31 release.

New DRA APIs for better accelerators and other hardware management

Kubernetes v1.31 brings an updated dynamic resource allocation (DRA) API and design. The main focus in the update is on structured parameters because they make resource information and requests transparent to Kubernetes and clients and enable implementing features like cluster autoscaling. DRA support in the kubelet was updated such that version skew between kubelet and the control plane is possible. With structured parameters, the scheduler allocates ResourceClaims while scheduling a pod…

·kubernetes.io·
Kubernetes v1.31: Elli
Comfy Org
Comfy Org
Creators of ComfyUI. We are a team dedicated to iterating on and improving ComfyUI, and to supporting the ComfyUI ecosystem with tools like the node manager, node registry, CLI, automated testing, and public documentation.
·comfy.org·
Comfy Org
Yamcs Mission Control
Yamcs Mission Control

Yamcs Mission Control

Yamcs is an open-source software framework for command and control of spacecraft, satellites, payloads, ground stations, and ground equipment. Telemetry…

August 13, 2024 at 09:55AM

via Instapaper

·yamcs.org·
Yamcs Mission Control
postgres.new: In-browser Postgres with an AI interface
postgres.new: In-browser Postgres with an AI interface

postgres.new: In-browser Postgres with an AI interface

Introducing postgres.new, the in-browser Postgres sandbox with AI assistance. With postgres.new, you can instantly spin up an unlimited number of Postgres…

August 13, 2024 at 09:09AM

via Instapaper

·supabase.com·
postgres.new: In-browser Postgres with an AI interface
Master Your New Laptop Setup: Tools, Configs (dot Files), and Secrets!
Master Your New Laptop Setup: Tools, Configs (dot Files), and Secrets!

Master Your New Laptop Setup: Tools, Configs (dot Files), and Secrets!

I just got a new machine and need to set it up. In this video, I'll show you how to quickly configure your terminal and install essential tools using scripts and Devbox. We'll cover cloning a Git repo, running installation scripts, and syncing dot files with Stow for a seamless setup across multiple devices. Learn how to keep your configurations consistent and avoid leaking secrets. Perfect for developers who want a streamlined setup process. Watch as I transform a fresh machine into a fully configured development environment in no time!
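For those unfamiliar with the workflow described here, a rough sketch of the Stow part (the repository URL and package names are hypothetical):

git clone https://github.com/example/dotfiles ~/dotfiles
cd ~/dotfiles
stow --target "$HOME" zsh git nvim   # symlinks each package's files into $HOME

Each top-level directory in the repo (zsh, git, nvim) mirrors the layout of $HOME, so the same repo can be stowed on every machine to keep configurations consistent.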

#DeveloperSetup #TerminalConfiguration #DotFilesManagement

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Transcript and commands: https://devopstoolkit.live/terminal/master-your-new-laptop-setup-tools-configs-and-secrets
🔗 stow: https://gnu.org/software/stow
🎬 Nix for Everyone: Unleash Devbox for Simplified Development: https://youtu.be/WiFLtcBvGMU
🎬 Secrets Made My Life Miserable - Consume Secrets Easily With Teller: https://youtu.be/Vcjz-YM3uLQ

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendar.app.google/Q9eaDUHN8ibWBaA7A to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter: https://twitter.com/vfarcic
➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Introduction to dot Files
04:10 How to Manage dot Files?

via YouTube https://www.youtube.com/watch?v=FH083GOJoIM

·youtube.com·
Master Your New Laptop Setup: Tools, Configs (dot Files), and Secrets!
Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control
Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control

Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control

https://kubernetes.io/blog/2024/08/12/feature-gates-in-client-go/

Kubernetes components use on-off switches called feature gates to manage the risk of adding a new feature. The feature gate mechanism is what enables incremental graduation of a feature through the stages Alpha, Beta, and GA.

Kubernetes components, such as kube-controller-manager and kube-scheduler, use the client-go library to interact with the API. The same library is used across the Kubernetes ecosystem to build controllers, tools, webhooks, and more. client-go now includes its own feature gating mechanism, giving developers and cluster administrators more control over how they adopt client features.

To learn more about feature gates in Kubernetes, visit Feature Gates.

Motivation

In the absence of client-go feature gates, each new feature separated feature availability from enablement in its own way, if at all. Some features were enabled by updating to a newer version of client-go. Others needed to be actively configured in each program that used them. A few were configurable at runtime using environment variables. Consuming a feature-gated functionality exposed by the kube-apiserver sometimes required a client-side fallback mechanism to remain compatible with servers that don’t support the functionality due to their age or configuration. In cases where issues were discovered in these fallback mechanisms, mitigation required updating to a fixed version of client-go or rolling back.

None of these approaches offer good support for enabling a feature by default in some, but not all, programs that consume client-go. Instead of enabling a new feature at first only for a single component, a change in the default setting immediately affects the default for all Kubernetes components, which broadens the blast radius significantly.

Feature gates in client-go

To address these challenges, substantial client-go features will be phased in using the new feature gate mechanism. It will allow developers and users to enable or disable features in a way that will be familiar to anyone who has experience with feature gates in the Kubernetes components.

Out of the box, simply by using a recent version of client-go, this offers several benefits.

For people who use software built with client-go:

Early adopters can enable a default-off client-go feature on a per-process basis.

Misbehaving features can be disabled without building a new binary.

The state of all known client-go feature gates is logged, allowing users to inspect it.

For people who develop software built with client-go:

By default, client-go feature gate overrides are read from environment variables. If a bug is found in a client-go feature, users will be able to disable it without waiting for a new release.

Developers can replace the default environment-variable-based overrides in a program to change defaults, read overrides from another source, or disable runtime overrides completely. The Kubernetes components use this customizability to integrate client-go feature gates with the existing --feature-gates command-line flag, feature enablement metrics, and logging.

Overriding client-go feature gates

Note: This describes the default method for overriding client-go feature gates at runtime. It can be disabled or customized by the developer of a particular program. In Kubernetes components, client-go feature gate overrides are controlled by the --feature-gates flag.

Features of client-go can be enabled or disabled by setting environment variables prefixed with KUBE_FEATURE. For example, to enable a feature named MyFeature, set the environment variable as follows:

KUBE_FEATURE_MyFeature=true

To disable the feature, set the environment variable to false:

KUBE_FEATURE_MyFeature=false

Note: Environment variables are case-sensitive on some operating systems. Therefore, KUBE_FEATURE_MyFeature and KUBE_FEATURE_MYFEATURE would be considered two different variables.

Customizing client-go feature gates

The default environment-variable based mechanism for feature gate overrides can be sufficient for many programs in the Kubernetes ecosystem, and requires no special integration. Programs that require different behavior can replace it with their own custom feature gate provider. This allows a program to do things like force-disable a feature that is known to work poorly, read feature gates directly from a remote configuration service, or accept feature gate overrides through command-line options.

The Kubernetes components replace client-go’s default feature gate provider with a shim to the existing Kubernetes feature gate provider. For all practical purposes, client-go feature gates are treated the same as other Kubernetes feature gates: they are wired to the --feature-gates command-line flag, included in feature enablement metrics, and logged on startup.

To replace the default feature gate provider, implement the Gates interface and call ReplaceFeatureGates at package initialization time, as in this simple example:

import (
	"k8s.io/client-go/features"
)

type AlwaysEnabledGates struct{}

func (AlwaysEnabledGates) Enabled(features.Feature) bool {
	return true
}

func init() {
	features.ReplaceFeatureGates(AlwaysEnabledGates{})
}

Implementations that need the complete list of defined client-go features can get it by implementing the Registry interface and calling AddFeaturesToExistingFeatureGates. For a complete example, refer to the usage within Kubernetes.

Summary

With the introduction of feature gates in client-go v1.30, rolling out a new client-go feature has become safer and easier. Users and developers can control the pace of their own adoption of client-go features. The work of Kubernetes contributors is streamlined by having a common mechanism for graduating features that span both sides of the Kubernetes API boundary.

Special shoutout to @sttts and @deads2k for their help in shaping this feature.

via Kubernetes Blog https://kubernetes.io/

August 11, 2024 at 08:00PM

·kubernetes.io·
Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control
Meteor Shower PSA
Meteor Shower PSA
If you hold the meteor too long, it may imprint on you and form a contact binary, making reintroduction to space difficult.
·xkcd.com·
Meteor Shower PSA