
1_r/devopsish
Blog: Introducing Hydrophone
https://www.kubernetes.dev/blog/2024/05/23/introducing-hydrophone/
In the ever-changing landscape of Kubernetes, ensuring that clusters operate as intended is essential. This is where conformance testing becomes crucial, verifying that a Kubernetes cluster meets the required standards set by the community. Today, we’re thrilled to introduce Hydrophone, a lightweight runner designed to streamline Kubernetes tests using the official conformance images released by the Kubernetes release team.
Simplified Kubernetes testing with Hydrophone
Hydrophone’s design philosophy centers around ease of use. By starting the conformance image as a pod within the conformance namespace, Hydrophone waits for the tests to conclude, then prints and exports the results. This approach offers a hassle-free method for running either individual tests or the entire Conformance Test Suite.
Key features of Hydrophone
Ease of Use: Designed with simplicity in mind, Hydrophone provides an easy-to-use tool for conducting Kubernetes conformance tests.
Official Conformance Images: It leverages the official conformance images from the Kubernetes Release Team, ensuring that you’re using the most up-to-date and reliable resources for testing.
Flexible Test Execution: Run a single test, the entire Conformance Test Suite, or anything in between.
Streamlining Kubernetes conformance with Hydrophone
In the Kubernetes world, where providers like EKS, Rancher, and k3s offer diverse environments, ensuring consistent experiences is vital. This consistency is anchored in conformance testing, which validates whether these environments adhere to Kubernetes community standards. Historically, this validation has either been cumbersome or has required third-party tools. Hydrophone offers a simple, single-binary tool that streamlines running these essential conformance tests. It’s designed to be user-friendly, allowing for straightforward validation of Kubernetes clusters against community benchmarks, ensuring providers can offer a certified, consistent service.
Hydrophone doesn’t aim to replace the myriad of Kubernetes testing frameworks out there but rather to complement them. It focuses on facilitating conformance tests efficiently, without developing new tests or heavy integration with other tools.
Getting started with Hydrophone
Installing Hydrophone is straightforward. You need a Go development environment; once you have that:
go install sigs.k8s.io/hydrophone@latest
Running hydrophone by default will:
Create a pod and supporting resources in the conformance namespace on your cluster.
Execute the entire conformance test suite for the cluster version you’re running.
Output the test results and export e2e.log and junit_01.xml needed for conformance validation.
There are supporting flags to specify which tests to run, which to skip, the cluster you’re targeting and much more!
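For illustration, here is a minimal sketch of common invocations. The flag names are taken from the project README at the time of writing and may change between releases, so confirm with hydrophone --help:

# Run the full Conformance Test Suite against the current kubeconfig context
hydrophone --conformance

# Run a single test (test name here is illustrative) by regular expression
hydrophone --focus 'Simple pod should contain last line of the log'

# Target a specific cluster and choose where e2e.log and junit_01.xml are written
hydrophone --kubeconfig ~/.kube/other-cluster --focus '\[sig-cli\].*Conformance' --output-dir ./results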
Community and contributions
The Hydrophone project is part of SIG Testing and open to the community for bugs, feature requests, and other contributions. You can engage with the project maintainers via the Kubernetes Slack channels #hydrophone, #sig-testing, and #k8s-conformance, or by filing an issue against the repository. We’re also active on the Kubernetes SIG-Testing and SIG-Release mailing lists. We encourage pull requests and discussions to make Hydrophone even better.
Join us in simplifying Kubernetes testing
In SIG Testing, we believe Hydrophone will be a valuable tool for anyone looking to validate the conformance of their Kubernetes clusters easily. Whether you’re developing new features or testing your application, Hydrophone offers an efficient testing experience.
via Kubernetes Contributors – Contributor Blog https://www.kubernetes.dev/blog/
May 22, 2024 at 08:00PM
Week Ending May 19, 2024
https://lwkd.info/2024/20240522
Developer News
CNCF TAG Environmental Sustainability is looking for best practice recommendations. Minikube has a fast 5-question survey.
The CNCF has shared a statement about Kubecon NA 2024 and Utah law.
Celebrate Kubernetes’ 10th anniversary on June 6! Contributors are planning events all over the world for our first decade.
Release Schedule
Next Deadline: Production Readiness Freeze, June 6th, 2024
Release Team Shadow notifications will be sent out by Wednesday, May 22, 2024, at the latest.
SIG Leads and contributors: time to decide which Enhancements are making 1.31.
Patch releases 1.30.1, 1.29.5, 1.28.10, and 1.27.14 are available. These are largely bugfix releases, including patches for some 1.30 regressions and a golang update.
KEP of the Week
KEP 4568: Resilient watchcache initialization
This KEP mitigates the issues that can lead to an overload of the kube-apiserver and etcd during initialization or reinitialization of the watchcache layer.
The changes reduce the number of requests during initialization by introducing a new PostStartHook that waits for the watchcache of all built-in resources to be initialized first. It also rejects hanging watches with a 429 Too Many Requests response. Other changes include adjusting which lists are delegated to etcd.
This KEP is tracked to be promoted to beta in the upcoming 1.31 release.
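As a quick way to see the mechanism this KEP builds on, the kube-apiserver's verbose readiness endpoint lists its post-start hooks; this is a generic check rather than anything specific to the new hook:

# List the kube-apiserver readiness checks, including poststarthook entries;
# on a cluster running this feature the watchcache-related hook should appear here.
kubectl get --raw '/readyz?verbose'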
Other Merges
Reversion: DisableServiceLinks admission controller in favor of Mutating Admission Policies
Reverted Reversion: revert removing legacy cloud providers from staging, and then revert the reversion, so they are actually gone from staging, at least until next week
Ignore the grace period when terminating for resource outage or force-terminate
All scheduler profiles have access to all waiting pods
Add a whole set of “keeper flags” for kubectl debug
Prevent running with an erroneous encryption config
Don’t require finalizer role for cronjobs, for backwards compatibility
Kubeadm: allow patching coredns deployment, use etcd’s livez & readyz, get image pull policy from UpgradeConfiguration
Move the remote CRI code to cri-client
Warn when the reflector bookmark wasn’t received
Test Improvements: swap stress tests
Deprecated
Remove ENABLE_CLIENT_GO_WATCH_LIST_ALPHA variable from reflector
Version Updates
go to 1.21.10 in release versions, and 1.22.3 in v1.31
Subprojects and Dependency Updates
cri-o to v1.30.1: fixed kubelet image garbage collection
kops to v1.29: (experimental) support for OpenTelemetry
minikube to v1.33.1: fix cilium pods failing to start-up
kind to v0.23.0: initial limited support for nerdctl and kube-proxy nftables mode
kubebuilder to v3.15.0: discontinue Kube RBAC Proxy in Default Kubebuilder Scaffolding
containerd to v1.7.17: handle unsupported config versions
via Last Week in Kubernetes Development https://lwkd.info/
May 22, 2024 at 03:00PM
About Winamp - Winamp has announced that it is opening up its source code to enable collaborative development of its legendary player for Windows.
Winamp • May 16, 2024 • Press Release Winamp has announced that on 24 September 2024, the application's source code will be open to developers worldwide. Winamp…
May 22, 2024 at 01:31PM
via Instapaper
semgrep/semgrep: Lightweight static analysis for many languages. Find bug variants with patterns that look like source code.
Code scanning at ludicrous speed. This repository contains the source code for Semgrep OSS (open-source software). Semgrep OSS is a fast, open-source, static…
May 22, 2024 at 09:33AM
via Instapaper
Open VSX Registry
We've established a working group devoted entirely to the operation, maintenance, and promotion of…
May 20, 2024 at 12:44PM
via Instapaper
Debug Kubernetes with eBPF and Inspektor Gadget
Unlock the power of eBPF for Kubernetes debugging with Inspektor Gadget. We'll demonstrate how to install and use Inspektor Gadget, and walk through practical examples to troubleshoot and gain insights into your cluster issues.
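As a rough sketch of the workflow shown in the video (command names are from the Inspektor Gadget docs at the time of writing and may differ between versions):

# Install the kubectl plugin via krew, deploy the gadget DaemonSet to the cluster,
# then trace process executions across all nodes
kubectl krew install gadget
kubectl gadget deploy
kubectl gadget trace exec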
#eBPF #KubernetesDebugging #InspektorGadget
Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/observability/inspektor-gadget-kubernetes-debugging-ebpf 🔗 Inspektor Gadget: https://inspektor-gadget.io
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendar.app.google/Q9eaDUHN8ibWBaA7A to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ Twitter: https://twitter.com/vfarcic ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Introduction to Inspektor Gadget 03:27 Inspect Kubernetes with Inspektor Gadget
via YouTube https://www.youtube.com/watch?v=6cwb3xNcqqI
Red Hat prunes middleware to invest in AI
Exclusive Red Hat is slowing or stopping development of some of its middleware software, a situation which could result in some staff layoffs. The Register has…
May 20, 2024 at 11:00AM
via Instapaper
Completing the largest migration in Kubernetes history
https://kubernetes.io/blog/2024/05/20/completing-cloud-provider-migration/
Since as early as Kubernetes v1.7, the Kubernetes project has pursued the ambitious goal of removing built-in cloud provider integrations (KEP-2395). While these integrations were instrumental in Kubernetes' early development and growth, their removal was driven by two key factors: the growing complexity of maintaining native support for every cloud provider across millions of lines of Go code, and the desire to establish Kubernetes as a truly vendor-neutral platform.
After many releases, we're thrilled to announce that all cloud provider integrations have been successfully migrated from the core Kubernetes repository to external plugins. In addition to achieving our initial objectives, we've also significantly streamlined Kubernetes by removing roughly 1.5 million lines of code and reducing the binary sizes of core components by approximately 40%.
This migration was a complex and long-running effort due to the numerous impacted components and the critical code paths that relied on the built-in integrations for the five initial cloud providers: Google Cloud, AWS, Azure, OpenStack, and vSphere. To successfully complete this migration, we had to build four new subsystems from the ground up:
Cloud controller manager (KEP-2392)
API server network proxy (KEP-1281)
kubelet credential provider plugins (KEP-2133)
Storage migration to use CSI (KEP-625)
Each subsystem was critical to achieving full feature parity with the built-in capabilities, and each required several releases to reach GA-level maturity with a safe and reliable migration path. More on each subsystem below.
Cloud controller manager
The cloud controller manager was the first external component introduced in this effort, replacing functionality within the kube-controller-manager and kubelet that directly interacted with cloud APIs. This essential component is responsible for initializing nodes by applying metadata labels that indicate the cloud region and zone a Node is running on, as well as IP addresses that are only known to the cloud provider. Additionally, it runs the service controller, which is responsible for provisioning cloud load balancers for Services of type LoadBalancer.
To learn more, read Cloud Controller Manager in the Kubernetes documentation.
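For example, you can see the cloud-derived metadata a cloud controller manager applies by looking at the well-known topology labels and node addresses (the label keys below are the standard well-known ones; values depend on your provider):

# Show the region/zone labels set during node initialization
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
# Show the addresses (including cloud-provided IPs) for one node
kubectl get node <node-name> -o jsonpath='{.status.addresses}'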
API server network proxy
The API Server Network Proxy project, initiated in 2018 in collaboration with SIG API Machinery, aimed to replace the SSH tunneler functionality within the kube-apiserver. This tunneler had been used to securely proxy traffic between the Kubernetes control plane and nodes, but it heavily relied on provider-specific implementation details embedded in the kube-apiserver to establish these SSH tunnels.
Now, the API Server Network Proxy is a GA-level extension point within the kube-apiserver. It offers a generic proxying mechanism that can route traffic from the API server to nodes through a secure proxy, eliminating the need for the API server to have any knowledge of the specific cloud provider it is running on. This project also introduced the Konnectivity project, which has seen growing adoption in production environments.
You can learn more about the API Server Network Proxy from its README.
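As a rough sketch of how this extension point is wired up (based on the upstream Konnectivity setup guide; the API version, socket path, and file location here are illustrative and may differ by release), the kube-apiserver is pointed at an egress selector configuration via --egress-selector-config-file:

# Write an illustrative egress selector configuration for the kube-apiserver
cat <<EOF > /etc/kubernetes/egress-selector-configuration.yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
EOF
# kube-apiserver is then started with:
#   --egress-selector-config-file=/etc/kubernetes/egress-selector-configuration.yaml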
Credential provider plugins for the kubelet
The Kubelet credential provider plugin was developed to replace the kubelet's built-in functionality for dynamically fetching credentials for image registries hosted on Google Cloud, AWS, or Azure. The legacy capability was convenient as it allowed the kubelet to seamlessly retrieve short-lived tokens for pulling images from GCR, ECR, or ACR. However, like other areas of Kubernetes, supporting this required the kubelet to have specific knowledge of different cloud environments and APIs.
Introduced in 2019, the credential provider plugin mechanism offers a generic extension point for the kubelet to execute plugin binaries that dynamically provide credentials for images hosted on various clouds. This extensibility expands the kubelet's capabilities to fetch short-lived tokens beyond the initial three cloud providers.
To learn more, read kubelet credential provider for authenticated image pulls.
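A minimal sketch of the plugin configuration, assuming an ECR-style plugin (the binary name, registry pattern, and file paths here are illustrative; field names follow kubelet.config.k8s.io/v1):

# Write an illustrative credential provider config for the kubelet
cat <<EOF > /etc/kubernetes/credential-provider-config.yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
- name: ecr-credential-provider          # illustrative plugin binary name
  matchImages:
  - "*.dkr.ecr.*.amazonaws.com"          # illustrative registry pattern
  defaultCacheDuration: "12h"
  apiVersion: credentialprovider.kubelet.k8s.io/v1
EOF
# The kubelet is then started with:
#   --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml
#   --image-credential-provider-bin-dir=/usr/libexec/kubernetes/kubelet-plugins/credential-provider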
Storage plugin migration from in-tree to CSI
The Container Storage Interface (CSI) is a control plane standard for managing block and file storage systems in Kubernetes and other container orchestrators that went GA in 1.13. It was designed to replace the in-tree volume plugins built directly into Kubernetes with drivers that can run as Pods within the Kubernetes cluster. These drivers communicate with kube-controller-manager storage controllers via the Kubernetes API, and with kubelet through a local gRPC endpoint. Now there are over 100 CSI drivers available across all major cloud and storage vendors, making stateful workloads in Kubernetes a reality.
However, a major challenge remained on how to handle all the existing users of in-tree volume APIs. To retain API backwards compatibility, we built an API translation layer into our controllers that will convert the in-tree volume API into the equivalent CSI API. This allowed us to redirect all storage operations to the CSI driver, paving the way for us to remove the code for the built-in volume plugins without removing the API.
You can learn more about In-tree Storage migration in Kubernetes In-Tree to CSI Volume Migration Moves to Beta.
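To make the translation concrete, here is an illustrative comparison of the legacy in-tree AWS EBS provisioner and its CSI replacement (the provisioner names are the commonly used ones; your cluster's StorageClasses will vary):

# List StorageClasses and their provisioners
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
# legacy in-tree provisioner:  kubernetes.io/aws-ebs
# external CSI driver:         ebs.csi.aws.com
# With the migration enabled, volumes created through the in-tree API are translated
# by the controllers into the equivalent CSI calls and served by the CSI driver.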
What's next?
This migration has been the primary focus for SIG Cloud Provider over the past few years. With this significant milestone achieved, we will be shifting our efforts towards exploring new and innovative ways for Kubernetes to better integrate with cloud providers, leveraging the external subsystems we've built over the years. This includes making Kubernetes smarter in hybrid environments where nodes in the cluster can run on both public and private clouds, as well as providing better tools and frameworks for developers of external providers to simplify and streamline their integration efforts.
With all the new features, tools, and frameworks being planned, SIG Cloud Provider is not forgetting about the other side of the equation: testing. Another area of focus for the SIG's future activities is the improvement of cloud controller testing to include more providers. The ultimate goal of this effort is to create a testing framework that includes as many providers as possible, so that we can give the Kubernetes community the highest levels of confidence about their Kubernetes environments.
If you're using a version of Kubernetes older than v1.29 and haven't migrated to an external cloud provider yet, we recommend checking out our previous blog post Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components. It provides detailed information on the changes we've made and offers guidance on how to migrate to an external provider. Starting in v1.31, in-tree cloud providers will be permanently disabled and removed from core Kubernetes components.
If you’re interested in contributing, come join our bi-weekly SIG meetings!
via Kubernetes Blog https://kubernetes.io/
May 19, 2024 at 08:00PM