1_r/devopsish

1_r/devopsish

omnibor/bomsh
omnibor/bomsh

omnibor/bomsh

Bomsh is a collection of tools for exploring the OmniBOR idea. It includes the following tools: Bombash, a Bash-based shell that generates OmniBOR artifact trees for software.

Tags:

via Pocket https://github.com/omnibor/bomsh

March 04, 2024 at 01:27PM

·github.com·
omnibor/bomsh
omnibor/omnibor-rs
omnibor/omnibor-rs

omnibor/omnibor-rs

This repository contains two Rust crates; to use either crate, review its README.md file. Artifact IDs enable anyone to identify and cross-reference information for software artifacts without a central authority.
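
Conceptually, an Artifact ID is a gitoid: a hash computed over a git-style blob header plus the file's raw bytes, so any party hashing the same bytes derives the same ID. The Go sketch below is illustrative only (it is not code from the omnibor-rs crates) and assumes the SHA-256 gitoid flavor:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
)

// artifactID computes a SHA-256 gitoid over the given bytes:
// the hash of "blob <length>\x00" followed by the raw content.
func artifactID(data []byte) string {
	h := sha256.New()
	fmt.Fprintf(h, "blob %d\x00", len(data))
	h.Write(data)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	data, err := os.ReadFile("main.go")
	if err != nil {
		panic(err)
	}
	// Anyone hashing the same bytes gets the same ID, so no central
	// registry is needed to cross-reference artifacts.
	fmt.Println(artifactID(data))
}
```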

Tags:

via Pocket https://github.com/omnibor/omnibor-rs

March 04, 2024 at 01:26PM

·github.com·
omnibor/omnibor-rs
OmniBOR
OmniBOR
A draft standard for communicating a cryptographic record of build inputs for software artifacts.
·github.com·
OmniBOR
Return-to-office initiatives or stealth layoffs? Why not both?
Return-to-office initiatives or stealth layoffs? Why not both?
Dell has recently been accused of forcing people to quit by requiring them to return to the office unnecessarily. It’s far from the only company to use this tactic.
·computerworld.com·
Return-to-office initiatives or stealth layoffs? Why not both?
AI hiring tools may be filtering out the best job applicants
AI hiring tools may be filtering out the best job applicants
As firms increasingly rely on artificial intelligence-driven hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.
·bbc.com·
AI hiring tools may be filtering out the best job applicants
AI: The Difference Between Open and Open Source
AI: The Difference Between Open and Open Source
There is a pattern in AI that has become clear. Every few weeks – or more accurately at this point, days – there is a release of a new AI model. The majority of these models are not released under an actual open source license, but instead under licensing that imposes varying restrictions based on
·redmonk.com·
AI: The Difference Between Open and Open Source
awslabs/llrt: LLRT (Low Latency Runtime) is an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications.
awslabs/llrt: LLRT (Low Latency Runtime) is an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications.
LLRT (Low Latency Runtime) is an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications. - awslabs/llrt
·github.com·
awslabs/llrt: LLRT (Low Latency Runtime) is an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications.
Platform Engineering: Orchestrating Applications, Platforms, and Infrastructure
Platform Engineering: Orchestrating Applications, Platforms, and Infrastructure
We all know that when the good people at Gartner highlight a topic as a strategic technology trend for the coming year, everyone will pay more attention – especially within the enterprise ecosystem. The topic of platform engineering feels no different. This article explores the three layers of modern platform building and platform engineering: application choreography, platform orchestration, and infrastructure composition.
·syntasso.io·
Platform Engineering: Orchestrating Applications, Platforms, and Infrastructure
US FTC: Price fixing by algorithm is still price fixing
US FTC: Price fixing by algorithm is still price fixing
Landlords and property managers can’t collude on rental pricing. Using new technology to do it doesn’t change that antitrust fundamental. Regardless of the industry you’re in, if your business uses an algorithm to determine prices, a brief filed by the FTC and the Department of Justice offers a helpful guideline for antitrust compliance: your algorithm can’t do anything that would be illegal if done by a real person.
·ftc.gov·
US FTC: Price fixing by algorithm is still price fixing
Employees are happier and you have a wider pool of applicants if you don’t restrict workers to living close to the office and coming in regularly? Who could have predicted this outcome? 🙃
Employees are happier and you have a wider pool of applicants if you don’t restrict workers to living close to the office and coming in regularly? Who could have predicted this outcome? 🙃
Employees are happier and you have a wider pool of applicants if you don’t restrict workers to living close to the office and coming in regularly? Who could have predicted this outcome? 🙃 pic.twitter.com/j5q7RZfLR8— Dare Obasanjo🐀 (@Carnage4Life) March 2, 2024
·x.com·
Employees are happier and you have a wider pool of applicants if you don’t restrict workers to living close to the office and coming in regularly? Who could have predicted this outcome? 🙃
Elon Musk sues OpenAI and Sam Altman over 'betrayal' of nonprofit AI mission | TechCrunch
Elon Musk sues OpenAI and Sam Altman over 'betrayal' of nonprofit AI mission | TechCrunch
Elon Musk has sued OpenAI, its co-founders Sam Altman and Greg Brockman, and its affiliated entities, alleging the ChatGPT makers have breached their original contractual agreements.
·techcrunch.com·
Elon Musk sues OpenAI and Sam Altman over 'betrayal' of nonprofit AI mission | TechCrunch
Don’t let them call you an imposter — Betty Junod
Don’t let them call you an imposter — Betty Junod
In recent years, I’ve lost count of how many times I have had the imposter syndrome talk with someone. People at all levels of experience would mention having imposter syndrome, and at first I simply acknowledged their experience, then I would reassure them of their capabilities, and most recently,
·bettyjunod.com·
Don’t let them call you an imposter — Betty Junod
Starship: Cross-Shell Prompt
Starship: Cross-Shell Prompt
Starship is the minimal, blazing fast, and extremely customizable prompt for any shell! Shows the information you need, while staying sleek and minimal. Quick installation available for Bash, Fish, ZSH, Ion, Tcsh, Elvish, Nu, Xonsh, Cmd, and Powershell.
·starship.rs·
Starship: Cross-Shell Prompt
The KDE desktop gets an overhaul with Plasma 6
The KDE desktop gets an overhaul with Plasma 6
It's been nearly 10 years since KDE Plasma 5, which was the previous major release of the desktop. On February 28 the project announced its "mega release" of KDE Plasma 6, KDE Frameworks 6, and KDE Gear 24.02 — all based on the Qt 6 development framework. This release focuses heavily on migrating to Wayland, and aspires to be a seamless upgrade for the user while improving performance, security, and support for newer hardware. For developers, a lot of work has gone into removing deprecated frameworks and decreasing dependencies to make it easier to write applications targeting KDE.
·lwn.net·
The KDE desktop gets an overhaul with Plasma 6
Spotlight on SIG Cloud Provider
Spotlight on SIG Cloud Provider

Spotlight on SIG Cloud Provider

https://kubernetes.io/blog/2024/03/01/sig-cloud-provider-spotlight-2024/

Author: Arujjwal Negi

One of the most popular ways developers use Kubernetes-related services is via cloud providers, but have you ever wondered how cloud providers make that possible? How does this whole process of integrating Kubernetes with various cloud providers happen? To answer that, let's put the spotlight on SIG Cloud Provider.

SIG Cloud Provider works to create seamless integrations between Kubernetes and various cloud providers. Their mission? Keeping the Kubernetes ecosystem fair and open for all. By setting clear standards and requirements, they ensure every cloud provider plays nicely with Kubernetes. It is their responsibility to configure cluster components to enable cloud provider integrations.

In this blog of the SIG Spotlight series, Arujjwal Negi interviews Michael McCune (Red Hat), also known as elmiko, co-chair of SIG Cloud Provider, to give us an insight into the workings of this group.

Introduction

Arujjwal: Let's start by getting to know you. Can you give us a small intro about yourself and how you got into Kubernetes?

Michael: Hi, I’m Michael McCune, most people around the community call me by my handle, elmiko. I’ve been a software developer for a long time now (Windows 3.1 was popular when I started!), and I’ve been involved with open-source software for most of my career. I first got involved with Kubernetes as a developer of machine learning and data science applications; the team I was on at the time was creating tutorials and examples to demonstrate the use of technologies like Apache Spark on Kubernetes. That said, I’ve been interested in distributed systems for many years and when an opportunity arose to join a team working directly on Kubernetes, I jumped at it!

Functioning and working

Arujjwal: Can you give us an insight into what SIG Cloud Provider does and how it functions?

Michael: SIG Cloud Provider was formed to help ensure that Kubernetes provides a neutral integration point for all infrastructure providers. Our largest task to date has been the extraction and migration of in-tree cloud controllers to out-of-tree components. The SIG meets regularly to discuss progress and upcoming tasks and also to answer questions and bugs that arise. Additionally, we act as a coordination point for cloud provider subprojects such as the cloud provider framework, specific cloud controller implementations, and the Konnectivity proxy project.

Arujjwal: After going through the project README, I learned that SIG Cloud Provider works with the integration of Kubernetes with cloud providers. How does this whole process go?

Michael: One of the most common ways to run Kubernetes is by deploying it to a cloud environment (AWS, Azure, GCP, etc). Frequently, the cloud infrastructures have features that enhance the performance of Kubernetes, for example, by providing elastic load balancing for Service objects. To ensure that cloud-specific services can be consistently consumed by Kubernetes, the Kubernetes community has created cloud controllers to address these integration points. Cloud providers can create their own controllers either by using the framework maintained by the SIG or by following the API guides defined in the Kubernetes code and documentation. One thing I would like to point out is that SIG Cloud Provider does not deal with the lifecycle of nodes in a Kubernetes cluster; for those types of topics, SIG Cluster Lifecycle and the Cluster API project are more appropriate venues.
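
To make that integration point concrete, here is a deliberately simplified Go sketch of the kind of contract a cloud controller fulfills for Services of type LoadBalancer. The interface and method names below are illustrative stand-ins, not the actual k8s.io/cloud-provider API:

```go
package main

import (
	"context"
	"fmt"
)

// LoadBalancerProvider is a hypothetical stand-in for the load-balancer
// portion of a cloud provider integration: Kubernetes asks the provider
// to reconcile a cloud load balancer for a Service of type LoadBalancer.
type LoadBalancerProvider interface {
	// EnsureLoadBalancer creates or updates a load balancer for the
	// named Service and returns its externally reachable address.
	EnsureLoadBalancer(ctx context.Context, serviceName string, nodeIPs []string) (address string, err error)
	// EnsureLoadBalancerDeleted removes the load balancer when the
	// Service is deleted.
	EnsureLoadBalancerDeleted(ctx context.Context, serviceName string) error
}

// fakeCloud is a toy implementation used only to show the shape of the flow.
type fakeCloud struct{}

func (fakeCloud) EnsureLoadBalancer(ctx context.Context, serviceName string, nodeIPs []string) (string, error) {
	// A real provider would call its cloud API here (e.g. provision an
	// elastic load balancer and register the node addresses).
	return "203.0.113.10", nil
}

func (fakeCloud) EnsureLoadBalancerDeleted(ctx context.Context, serviceName string) error {
	return nil
}

func main() {
	var cloud LoadBalancerProvider = fakeCloud{}
	addr, err := cloud.EnsureLoadBalancer(context.Background(), "default/my-service", []string{"10.0.0.1", "10.0.0.2"})
	if err != nil {
		panic(err)
	}
	fmt.Println("load balancer reachable at", addr)
}
```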

Important subprojects

Arujjwal: There are a lot of subprojects within this SIG. Can you highlight some of the most important ones and what job they do?

Michael: I think the two most important subprojects today are the cloud provider framework and the extraction/migration project. The cloud provider framework is a common library to help infrastructure integrators build a cloud controller for their infrastructure. This project is most frequently the starting point for new people coming to the SIG. The extraction and migration project is the other big subproject and a large part of why the framework exists. A little history might help explain further: for a long time, Kubernetes needed some integration with the underlying infrastructure, not necessarily to add features but to be aware of cloud events like instance termination. The cloud provider integrations were built into the Kubernetes code tree, and thus the term "in-tree" was created (check out this article on the topic for more info). The activity of maintaining provider-specific code in the main Kubernetes source tree was considered undesirable by the community. The community’s decision inspired the creation of the extraction and migration project to remove the "in-tree" cloud controllers in favor of "out-of-tree" components.

Arujjwal: What makes [the cloud provider framework] a good place to start? Does it have consistent good beginner work? What kind?

Michael: I feel that the cloud provider framework is a good place to start as it encodes the community’s preferred practices for cloud controller managers and, as such, will give a newcomer a strong understanding of how and what the managers do. Unfortunately, there is not a consistent stream of beginner work on this component; this is due in part to the mature nature of the framework and that of the individual providers as well. For folks who are interested in getting more involved, having some Go language knowledge is good and also having an understanding of how at least one cloud API (e.g., AWS, Azure, GCP) works is also beneficial. In my personal opinion, being a newcomer to SIG Cloud Provider can be challenging as most of the code around this project deals directly with specific cloud provider interactions. My best advice to people wanting to do more work on cloud providers is to grow your familiarity with one or two cloud APIs, then look for open issues on the controller managers for those clouds, and always communicate with the other contributors as much as possible.

Accomplishments

Arujjwal: Can you share about an accomplishment(s) of the SIG that you are proud of?

Michael: Since I joined the SIG, more than a year ago, we have made great progress in advancing the extraction and migration subproject. We have moved from an alpha status on the defining KEP to a beta status and are inching ever closer to removing the old provider code from the Kubernetes source tree. I've been really proud to see the active engagement from our community members and to see the progress we have made towards extraction. I have a feeling that, within the next few releases, we will see the final removal of the in-tree cloud controllers and the completion of the subproject.

Advice for new contributors

Arujjwal: Is there any suggestion or advice for new contributors on how they can start at SIG Cloud Provider?

Michael: This is a tricky question in my opinion. SIG Cloud Provider is focused on the code pieces that integrate between Kubernetes and an underlying infrastructure. It is very common, but not necessary, for members of the SIG to be representing a cloud provider in an official capacity. I recommend that anyone interested in this part of Kubernetes should come to an SIG meeting to see how we operate and also to study the cloud provider framework project. We have some interesting ideas for future work, such as a common testing framework, that will cut across all cloud providers and will be a great opportunity for anyone looking to expand their Kubernetes involvement.

Arujjwal: Are there any specific skills you're looking for that we should highlight? To give you an example from our own SIG ContribEx (https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md): if you're an expert in Hugo, we can always use some help with k8s.dev!

Michael: The SIG is currently working through the final phases of our extraction and migration process, but we are looking toward the future and starting to plan what will come next. One of the big topics that the SIG has discussed is testing. Currently, we do not have a generic common set of tests that can be exercised by each cloud provider to confirm the behaviour of their controller manager. If you are an expert in Ginkgo and the Kubetest framework, we could probably use your help in designing and implementing the new tests.
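
As a rough idea of what such a common test could look like, here is a Ginkgo-style sketch in Go; the suite, the behavior being checked, and the ensureLoadBalancer helper are all hypothetical, not an existing Kubernetes test:

```go
package cloudprovider_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestCloudControllerManager(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Cloud Controller Manager conformance (illustrative)")
}

// ensureLoadBalancer is a hypothetical helper that each provider would back
// with its own cloud API calls; it stands in for whatever shared harness a
// common testing framework would define.
func ensureLoadBalancer(service string) (string, error) {
	return "203.0.113.10", nil
}

var _ = Describe("Service of type LoadBalancer", func() {
	It("provisions an externally reachable address", func() {
		addr, err := ensureLoadBalancer("default/my-service")
		Expect(err).NotTo(HaveOccurred())
		Expect(addr).NotTo(BeEmpty())
	})
})
```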

This is where the conversation ends. I hope this gave you some insights about SIG Cloud Provider's aim and working. This is just the tip of the iceberg. To know more and get involved with SIG Cloud Provider, try attending their meetings here.

via Kubernetes Blog https://kubernetes.io/

February 29, 2024 at 07:00PM

·kubernetes.io·
Spotlight on SIG Cloud Provider
Blog: Spotlight on SIG Cloud Provider
Blog: Spotlight on SIG Cloud Provider

Blog: Spotlight on SIG Cloud Provider

https://www.kubernetes.dev/blog/2024/03/01/sig-cloud-provider-spotlight-2024/

via Kubernetes Contributors – Contributor Blog https://www.kubernetes.dev/blog/

February 29, 2024 at 07:00PM

·kubernetes.dev·
Blog: Spotlight on SIG Cloud Provider
The disappointing tea.xyz
The disappointing tea.xyz
I got a pull request from a "tea.xyz"-related individual and unraveled a mess of a disappointing project.
·connortumbleson.com·
The disappointing tea.xyz
DevOps Toolkit - Getting Started with Crossplane: A Glimpse Into the Future | Tutorial (Part 1) - https://www.youtube.com/watch?v=bBpE0rfE-JM
DevOps Toolkit - Getting Started with Crossplane: A Glimpse Into the Future | Tutorial (Part 1) - https://www.youtube.com/watch?v=bBpE0rfE-JM

Getting Started with Crossplane: A Glimpse Into the Future | Tutorial (Part 1)

Embark on your journey to Control Plane mastery with the first installment of our Crossplane tutorial series. In this introductory ...

via YouTube https://www.youtube.com/watch?v=bBpE0rfE-JM

·youtube.com·
DevOps Toolkit - Getting Started with Crossplane: A Glimpse Into the Future | Tutorial (Part 1) - https://www.youtube.com/watch?v=bBpE0rfE-JM
Issue 52 I am Sam's low-level culpability
Issue 52 I am Sam's low-level culpability
Bitcoin prices are spiking. Are we in for another round of crypto mania? Also, Sam Bankman-Fried doesn't want to go to jail for 100 years.
·citationneeded.news·
Issue 52 I am Sam's low-level culpability
The Billionaire-Fueled Lobbying Group Behind the State Bills to Ban Basic Income Experiments
The Billionaire-Fueled Lobbying Group Behind the State Bills to Ban Basic Income Experiments
The Foundation for Government Accountability - a Florida-based lobbying group backed by the richest 1% - is working to get basic income experiments banned by state legislators across the U.S. As a well-known quote often wrongly attributed to Mahatma Gandhi says, “First they ignore you, then they laugh at
·scottsantens.com·
The Billionaire-Fueled Lobbying Group Behind the State Bills to Ban Basic Income Experiments
Positive Affirmations for Site Reliability Engineers
Positive Affirmations for Site Reliability Engineers
there will never be another outage again // featuring Alexis Gay: https://www.instagram.com/yayalexisgay // unlock exclusive deleted scenes: https://www.patreon.com/KRAZAM // https://instagram.com/krazam.tv // https://twitter.com/krazamtv
·youtu.be·
Positive Affirmations for Site Reliability Engineers
Week Ending February 25 2024
Week Ending February 25 2024

Week Ending February 25, 2024

http://lwkd.info/2024/20240227

Developer News

There’s an updated Kubernetes v1.30 State of the Release and Important Deadlines

Contributor Summit Paris schedule is live. If you have a new topic, time to suggest an unconference item.

Release Schedule

Next Deadline: Code Freeze Begins, March 5th

Kubernetes v1.30.0-alpha.3 is live!

The Code Freeze milestone for the Kubernetes 1.30 release cycle is approaching rapidly. Have all your necessary changes been submitted? Following this, there’s the usual release countdown: documentation PRs are due by February 26th, the deprecation blog is published on Thursday, and the testing freeze and documentation finalization conclude next week. Once we enter Code Freeze, please promptly address any test failures. Questions can be asked in #SIG-release.

Featured PRs

122589: promote contextual logging to beta, enabled by default

Adding contextual logging to Kubernetes has been a long, long road. Removing the tree-wide dependency on klog required refactoring code all over Kubernetes, which took the time of hundreds of contributors. This PR enables contextual logging by default since many components and clients now support it.
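
For illustration, contextual logging means a component pulls a logger out of the context.Context it was handed instead of calling global klog functions directly. A minimal sketch with klog/v2 (the component name and message are made up):

```go
package main

import (
	"context"

	"k8s.io/klog/v2"
)

func reconcile(ctx context.Context, podName string) {
	// Retrieve the logger carried by the context; callers can attach a
	// name and key/value pairs so log lines show where they came from.
	logger := klog.FromContext(ctx)
	logger.Info("reconciling", "pod", podName)
}

func main() {
	// Attach a named logger to the context before handing it down.
	logger := klog.Background().WithName("example-controller")
	ctx := klog.NewContext(context.Background(), logger)
	reconcile(ctx, "default/my-pod")
	klog.Flush()
}
```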

123157: Add SELinuxMount feature gate

Use this one neat SELinux trick for faster relabeling of volumes! Users running SELinux in enforcing mode currently suffer latency because all content on a volume must be relabeled before pods can access it. SELinuxMount instead mounts the volume using -o context=XYZ, which skips the recursive walk. Currently alpha; needs tests, disabled by default.

KEP of the Week

KEP-4176: A new static policy to prefer allocating cores from different CPUs on the same socket

This KEP proposes a new CPU Manager Static Policy Option called distribute-cpus-across-cores to prefer allocating CPUs from different physical cores on the same socket. It is similar to the distribute-cpus-across-numa policy option, but it spreads CPU allocations instead of packing them together. Such a policy is useful if an application wants to avoid being a noisy neighbor with itself but still wants to take advantage of the L2 cache.
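
As a toy illustration of the difference between packing and spreading (this is not the actual CPU Manager code), assume one socket with four physical cores, each exposing two hardware threads; a spread-across-cores preference hands out one thread per core before reusing a core's sibling thread:

```go
package main

import "fmt"

// core models a physical core with its hardware-thread (CPU) IDs.
type core struct {
	cpus []int
}

// spreadAcrossCores picks n CPUs by taking one thread from each core in
// round-robin order, so an allocation touches as many cores as possible.
func spreadAcrossCores(cores []core, n int) []int {
	var picked []int
	for round := 0; len(picked) < n; round++ {
		progress := false
		for _, c := range cores {
			if round < len(c.cpus) {
				picked = append(picked, c.cpus[round])
				progress = true
				if len(picked) == n {
					return picked
				}
			}
		}
		if !progress {
			break // not enough CPUs to satisfy the request
		}
	}
	return picked
}

func main() {
	// Two hardware threads per core, four cores on one socket.
	cores := []core{{[]int{0, 4}}, {[]int{1, 5}}, {[]int{2, 6}}, {[]int{3, 7}}}
	// A packed allocation of 4 CPUs would fill core siblings first (0 4 1 5);
	// spreading yields one thread on each core instead.
	fmt.Println(spreadAcrossCores(cores, 4)) // [0 1 2 3]
}
```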

Other Merges

kubeadm certs check-expiration JSON and YAML support

Improved skip node search in specific cases for scheduler performance

kube_codegen `--plural-exceptions` flag and improved API type detection

Fix for kubeadm upgrade mounting a new device.

Flag to disable force detach behaviour in kube-controller-manager

Added the MutatingAdmissionPolicy flag to enable mutation policy in admission chain

kubelet adds an image field to the image_garbage_collected_total metric

Promotions

LoadBalancerIPMode to beta

ImageMaximumGCAge to beta

NewVolumeManagerReconstruction to GA

Version Updates

sampleapiserver is now v1.29.2

golangci-lint v1.56.0 to support Go 1.22

Subprojects and Dependency Updates

prometheus to 2.50.0: automated memory limit handling, multiple PromQL improvements

cri-o to v1.29.2: Enable automatic OpenTelemetry instrumentation of ttrpc calls to NRI plugins; Also released v1.28.4 and v1.27.4

via Last Week in Kubernetes Development http://lwkd.info/

February 27, 2024 at 05:00PM

·lwkd.info·
Week Ending February 25 2024