1_r/devopsish

Not soon enough | Linux 6.10 Is Disabling NFS v2 Client Support By Default
Not soon enough | Linux 6.10 Is Disabling NFS v2 Client Support By Default
Following the NFS server changes from a few days ago for Linux 6.10 that brought optimizations and prepping for the new 'nfsdctl' utility, the Network File System client changes have been submitted and merged for this new kernel.
·phoronix.com·
Not soon enough | Linux 6.10 Is Disabling NFS v2 Client Support By Default
Google fixes eighth actively exploited Chrome zero-day this year
Google fixes eighth actively exploited Chrome zero-day this year
Google has released a new emergency security update to address the eighth zero-day vulnerability in Chrome browser confirmed to be actively exploited in the wild.
·bleepingcomputer.com·
Google fixes eighth actively exploited Chrome zero-day this year
Apple Music 100 Best Albums
Apple Music 100 Best Albums
Welcome to 100 Best Albums, our definitive list of the greatest albums ever made. Sign up to stream full tracks or add these albums to your library.
·100best.music.apple.com·
Apple Music 100 Best Albums
Blog: Introducing Hydrophone
Blog: Introducing Hydrophone

Blog: Introducing Hydrophone

https://www.kubernetes.dev/blog/2024/05/23/introducing-hydrophone/

In the ever-changing landscape of Kubernetes, ensuring that clusters operate as intended is essential. This is where conformance testing becomes crucial, verifying that a Kubernetes cluster meets the required standards set by the community. Today, we’re thrilled to introduce Hydrophone, a lightweight runner designed to streamline Kubernetes tests using the official conformance images released by the Kubernetes release team.

Simplified Kubernetes testing with Hydrophone

Hydrophone’s design philosophy centers around ease of use. By starting the conformance image as a pod within the conformance namespace, Hydrophone waits for the tests to conclude, then prints and exports the results. This approach offers a hassle-free method for running either individual tests or the entire Conformance Test Suite.

Key features of Hydrophone

Ease of Use: Designed with simplicity in mind, Hydrophone provides an easy-to-use tool for conducting Kubernetes conformance tests.

Official Conformance Images: It leverages the official conformance images from the Kubernetes Release Team, ensuring that you’re using the most up-to-date and reliable resources for testing.

Flexible Test Execution: Whether you need to run a single test, the entire Conformance Test Suite, or anything in between, Hydrophone has you covered.

Streamlining Kubernetes conformance with Hydrophone

In the Kubernetes world, where providers like EKS, Rancher, and k3s offer diverse environments, ensuring consistent experiences is vital. This consistency is anchored in conformance testing, which validates whether these environments adhere to Kubernetes community standards. Historically, this validation has either been cumbersome or required third-party tools. Hydrophone offers a simple, single-binary tool that streamlines running these essential conformance tests. It’s designed to be user-friendly, allowing for straightforward validation of Kubernetes clusters against community benchmarks, ensuring providers can offer a certified, consistent service.

Hydrophone doesn’t aim to replace the myriad of Kubernetes testing frameworks out there but rather to complement them. It focuses on facilitating conformance tests efficiently, without developing new tests or heavy integration with other tools.

Getting started with Hydrophone

Installing Hydrophone is straightforward. You need a Go development environment; once you have that:

go install sigs.k8s.io/hydrophone@latest

Running hydrophone by default will:

Create a pod and supporting resources in the conformance namespace on your cluster.

Execute the entire conformance test suite for the cluster version you’re running.

Output the test results and export e2e.log and junit_01.xml needed for conformance validation.

There are supporting flags to specify which tests to run, which to skip, the cluster you’re targeting and much more!
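As a rough sketch of what that looks like in practice (the flag names below reflect the project documentation at the time of writing and may change; run hydrophone --help for the authoritative list):

# Run the full Conformance Test Suite against the cluster in your current kubeconfig
hydrophone --conformance

# Focus on a single test, or skip tests matching a pattern
hydrophone --focus 'Simple pod should contain last line of the log'
hydrophone --skip '[Serial]'

# Target a specific cluster and choose where e2e.log and junit_01.xml are written
hydrophone --kubeconfig ~/.kube/other-cluster --output-dir ./results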

Community and contributions

The Hydrophone project is part of SIG Testing and open to the community for bugs, feature requests, and other contributions. You can engage with the project maintainers via the Kubernetes Slack channels #hydrophone, #sig-testing, and #k8s-conformance, or by filing an issue against the repository. We’re also active in the Kubernetes SIG-Testing and SIG-Release mailing lists. We encourage pull requests and discussions to make Hydrophone even better.

Join us in simplifying Kubernetes testing

In SIG Testing, we believe Hydrophone will be a valuable tool for anyone looking to easily validate the conformance of their Kubernetes clusters. Whether you’re developing new features or testing your application, Hydrophone offers an efficient testing experience.

via Kubernetes Contributors – Contributor Blog https://www.kubernetes.dev/blog/

May 22, 2024 at 08:00PM

·kubernetes.dev·
Blog: Introducing Hydrophone
Toxic Gaslighting: How 3M Executives Convinced a Scientist the Forever Chemicals She Found in Human Blood Were Safe
Toxic Gaslighting: How 3M Executives Convinced a Scientist the Forever Chemicals She Found in Human Blood Were Safe
Decades ago, Kris Hansen showed 3M that its PFAS chemicals were in people’s bodies. Her bosses halted her work. As the EPA now forces the removal of the chemicals from drinking water, she wrestles with the secrets that 3M kept from her and the world.
·propublica.org·
Toxic Gaslighting: How 3M Executives Convinced a Scientist the Forever Chemicals She Found in Human Blood Were Safe
Last Week in Kubernetes Development - Week Ending May 19 2024
Last Week in Kubernetes Development - Week Ending May 19 2024

Week Ending May 19, 2024

https://lwkd.info/2024/20240522

Developer News

CNCF TAG Environmental Sustainability is looking for best practice recommendations. minikube has a fast 5-question survey.

The CNCF has shared a statement about Kubecon NA 2024 and Utah law.

Celebrate Kubernetes’ 10th anniversary on June 6! Contributors are planning events all over the world for our first decade.

Release Schedule

Next Deadline: Production Readiness Freeze, June 6th, 2024

Release Team Shadow notifications will be sent out by Wednesday, May 22, 2024, at the latest.

SIG Leads and contributors: time to decide which Enhancements are making 1.31.

Patch releases 1.30.1, 1.29.5, 1.28.10, and 1.27.14 are available. This is largely a bugfix release, including patches for some 1.30 regressions and a Golang update.

KEP of the Week

KEP 4568: Resilient watchcache initialization

This KEP mitigates issues that can lead to an overload of the kube-apiserver and etcd during initialization or reinitialization of the watchcache layer.

The changes reduce the number of requests during initialization by introducing a new PostStartHook that waits for the watchcache of all built-in resources to be initialized first. It also rejects hanging watches with a Too Many Requests (429) result. Other changes include adjusting which lists are delegated to etcd.

This KEP is tracked to be promoted to beta in the upcoming 1.31 release.

Other Merges

Reversion: DisableServiceLinks admission controller in favor of Mutating Admission Policies

Reverted Reversion: revert removing legacy cloud providers from staging, and then revert the reversion, so they are actually gone from staging, at least until next week

Ignore the grace period when terminating for resource outage or force-terminate

All scheduler profiles have access to all waiting pods

Add a whole set of “keeper flags” for kubectl debug

Prevent running with an errorful encryption config

Don’t require finalizer role for cronjobs, for backwards compatibility

Kubeadm: allow patching coredns deployment, use etcd’s livez & readyz, get image pull policy from UpgradeConfiguration

Move the remote CRI code to cri-client

Warn when the reflector bookmark wasn’t received

Test Improvements: swap stress tests

Deprecated

Remove ENABLE_CLIENT_GO_WATCH_LIST_ALPHA variable from reflector

Version Updates

go to 1.21.10 in release versions, and 1.22.3 in v1.31

Subprojects and Dependency Updates

cri-o to v1.30.1: fixed kubelet image garbage collection

kops to v1.29: (experimental) support for OpenTelemetry

minikube to v1.33.1: fix cilium pods failing to start-up

kind to v0.23.0: initial limited support for nerdctl and kube-proxy nftables mode

kubebuilder to v3.15.0: discontinue Kube RBAC Proxy in Default Kubebuilder Scaffolding

containerd to v1.7.17: handle unsupported config versions

via Last Week in Kubernetes Development https://lwkd.info/

May 22, 2024 at 03:00PM

·lwkd.info·
Last Week in Kubernetes Development - Week Ending May 19 2024
About Winamp - Winamp has announced that it is opening up its source code to enable collaborative development of its legendary player for Windows.
About Winamp - Winamp has announced that it is opening up its source code to enable collaborative development of its legendary player for Windows.

About Winamp - Winamp has announced that it is opening up its source code to enable collaborative development of its legendary player for Windows.

Winamp • May 16, 2024 • Press Release: Winamp has announced that on 24 September 2024, the application's source code will be open to developers worldwide. Winamp…

May 22, 2024 at 01:31PM

via Instapaper

·about.winamp.com·
About Winamp - Winamp has announced that it is opening up its source code to enable collaborative development of its legendary player for Windows.
semgrep/semgrep: Lightweight static analysis for many languages. Find bug variants with patterns that look like source code.
semgrep/semgrep: Lightweight static analysis for many languages. Find bug variants with patterns that look like source code.

semgrep/semgrep: Lightweight static analysis for many languages. Find bug variants with patterns that look like source code.

Code scanning at ludicrous speed. This repository contains the source code for Semgrep OSS (open-source software). Semgrep OSS is a fast, open-source, static…

May 22, 2024 at 09:33AM

via Instapaper

·github.com·
semgrep/semgrep: Lightweight static analysis for many languages. Find bug variants with patterns that look like source code.
QNAP QTS zero-day in Share feature gets public RCE exploit
QNAP QTS zero-day in Share feature gets public RCE exploit
An extensive security audit of QNAP QTS, the operating system for the company's NAS products, has uncovered fifteen vulnerabilities of varying severity, with eleven remaining unfixed.
·bleepingcomputer.com·
QNAP QTS zero-day in Share feature gets public RCE exploit
Hey, Let’s Embed an SRE in that Dev Team!
Hey, Let’s Embed an SRE in that Dev Team!
Get most of the benefit of a fully staffed team for a fraction of the cost? Looks great on paper.
·blog.mangoteque.com·
Hey, Let’s Embed an SRE in that Dev Team!
Now do hospitals! | Detecting Advanced Persistent Threats (APTs) in Financial Systems with eBPF
Now do hospitals! | Detecting Advanced Persistent Threats (APTs) in Financial Systems with eBPF
Explore how eBPF enhances security in Kubernetes for financial systems by detecting Advanced Persistent Threats (APTs). Learn about network monitoring, system call tracking, and application behaviour analysis to protect sensitive data with deep visibility and robust protection.
·kubelog.io·
Now do hospitals! | Detecting Advanced Persistent Threats (APTs) in Financial Systems with eBPF
I’m excited for the super smooth transition | Bluefin 2.7.0 - Bluefin and Aurora - Universal Blue
I’m excited for the super smooth transition | Bluefin 2.7.0 - Bluefin and Aurora - Universal Blue
We cut a new release today and spun new ISOs. Here’s all the changes. If you’re on a Framework laptop this includes all the hardware enablement out of the box. It’s kind of impossible to keep things a secret in OSS so no announcement on the Framework support yet, in the meantime enjoy it and please keep filing issues as you find them, thanks! 2.7.0 (2024-05-16) Features [This was reverted] Add back xwayland video bridge to all images (#1264) (91a67ad) add configure-grub just command (#1206) (c...
·universal-blue.discourse.group·
I’m excited for the super smooth transition | Bluefin 2.7.0 - Bluefin and Aurora - Universal Blue
VirusTotal/yara-x
VirusTotal/yara-x
A rewrite of YARA in Rust.
·github.com·
VirusTotal/yara-x
YARA is dead, long live YARA-X
YARA is dead, long live YARA-X
For over 15 years, YARA has been growing and evolving until it became an indispensable tool in every malware researcher's toolbox. Througho...
·blog.virustotal.com·
YARA is dead, long live YARA-X
Open VSX Registry
Open VSX Registry

Open VSX Registry

We've established a working group devoted entirely to the operation, maintenance, and promotion of…

May 20, 2024 at 12:44PM

via Instapaper

·open-vsx.org·
Open VSX Registry
Debug Kubernetes with eBPF and Inspektor Gadget
Debug Kubernetes with eBPF and Inspektor Gadget

Debug Kubernetes with eBPF and Inspektor Gadget

Unlock the power of eBPF for Kubernetes debugging with Inspektor Gadget. We'll demonstrate how to install and use Inspektor Gadget, and walk through practical examples to troubleshoot and gain insights into your cluster issues.

#eBPF #KubernetesDebugging #InspektorGadget

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/observability/inspektor-gadget-kubernetes-debugging-ebpf 🔗 Inspektor Gadget: https://inspektor-gadget.io

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendar.app.google/Q9eaDUHN8ibWBaA7A to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ Twitter: https://twitter.com/vfarcic ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Introduction to Inspektor Gadget 03:27 Inspect Kubernetes with Inspektor Gadget

via YouTube https://www.youtube.com/watch?v=6cwb3xNcqqI

·youtube.com·
Debug Kubernetes with eBPF and Inspektor Gadget
Red Hat prunes middleware to invest in AI
Red Hat prunes middleware to invest in AI

Red Hat prunes middleware to invest in AI

Exclusive: Red Hat is slowing or stopping development of some of its middleware software, a situation which could result in some staff layoffs. The Register has…

May 20, 2024 at 11:00AM

via Instapaper

·theregister.com·
Red Hat prunes middleware to invest in AI
Completing the largest migration in Kubernetes history
Completing the largest migration in Kubernetes history

Completing the largest migration in Kubernetes history

https://kubernetes.io/blog/2024/05/20/completing-cloud-provider-migration/

Since as early as Kubernetes v1.7, the Kubernetes project has pursued the ambitious goal of removing built-in cloud provider integrations (KEP-2395). While these integrations were instrumental in Kubernetes' early development and growth, their removal was driven by two key factors: the growing complexity of maintaining native support for every cloud provider across millions of lines of Go code, and the desire to establish Kubernetes as a truly vendor-neutral platform.

After many releases, we're thrilled to announce that all cloud provider integrations have been successfully migrated from the core Kubernetes repository to external plugins. In addition to achieving our initial objectives, we've also significantly streamlined Kubernetes by removing roughly 1.5 million lines of code and reducing the binary sizes of core components by approximately 40%.

This migration was a complex and long-running effort due to the numerous impacted components and the critical code paths that relied on the built-in integrations for the five initial cloud providers: Google Cloud, AWS, Azure, OpenStack, and vSphere. To successfully complete this migration, we had to build four new subsystems from the ground up:

Cloud controller manager (KEP-2392)

API server network proxy (KEP-1281)

kubelet credential provider plugins (KEP-2133)

Storage migration to use CSI (KEP-625)

Each subsystem was critical to achieving full feature parity with the built-in capabilities, and each required several releases to reach GA-level maturity with a safe and reliable migration path. More on each subsystem below.

Cloud controller manager

The cloud controller manager was the first external component introduced in this effort, replacing functionality within the kube-controller-manager and kubelet that directly interacted with cloud APIs. This essential component is responsible for initializing nodes by applying metadata labels that indicate the cloud region and zone a Node is running on, as well as IP addresses that are only known to the cloud provider. Additionally, it runs the service controller, which is responsible for provisioning cloud load balancers for Services of type LoadBalancer.

To learn more, read Cloud Controller Manager in the Kubernetes documentation.
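As a quick way to see this in practice (a sketch; the exact label values and provider ID format depend on your cloud provider):

# Region/zone labels and the provider ID are applied by the cloud controller manager
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
kubectl get node <node-name> -o jsonpath='{.spec.providerID}'

# New nodes carry the node.cloudprovider.kubernetes.io/uninitialized taint until
# the cloud controller manager has initialized them
kubectl describe node <node-name> | grep -A3 Taints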

API server network proxy

The API Server Network Proxy project, initiated in 2018 in collaboration with SIG API Machinery, aimed to replace the SSH tunneler functionality within the kube-apiserver. This tunneler had been used to securely proxy traffic between the Kubernetes control plane and nodes, but it heavily relied on provider-specific implementation details embedded in the kube-apiserver to establish these SSH tunnels.

Now, the API Server Network Proxy is a GA-level extension point within the kube-apiserver. It offers a generic proxying mechanism that can route traffic from the API server to nodes through a secure proxy, eliminating the need for the API server to have any knowledge of the specific cloud provider it is running on. This project also introduced the Konnectivity project, which has seen growing adoption in production environments.

You can learn more about the API Server Network Proxy from its README.

Credential provider plugins for the kubelet

The Kubelet credential provider plugin was developed to replace the kubelet's built-in functionality for dynamically fetching credentials for image registries hosted on Google Cloud, AWS, or Azure. The legacy capability was convenient as it allowed the kubelet to seamlessly retrieve short-lived tokens for pulling images from GCR, ECR, or ACR. However, like other areas of Kubernetes, supporting this required the kubelet to have specific knowledge of different cloud environments and APIs.

Introduced in 2019, the credential provider plugin mechanism offers a generic extension point for the kubelet to execute plugin binaries that dynamically provide credentials for images hosted on various clouds. This extensibility expands the kubelet's capabilities to fetch short-lived tokens beyond the initial three cloud providers.

To learn more, read kubelet credential provider for authenticated image pulls.
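For context, here is a minimal sketch of how the extension point is wired up; the two flags are the upstream kubelet flags, while the paths and the example provider binary are illustrative rather than prescriptive:

# Point the kubelet at a CredentialProviderConfig file and at the directory
# holding the provider binaries (for example, ecr-credential-provider).
# Only the relevant flags are shown here.
kubelet \
  --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml \
  --image-credential-provider-bin-dir=/usr/local/bin/credential-providers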

Storage plugin migration from in-tree to CSI

The Container Storage Interface (CSI) is a control plane standard for managing block and file storage systems in Kubernetes and other container orchestrators that went GA in 1.13. It was designed to replace the in-tree volume plugins built directly into Kubernetes with drivers that can run as Pods within the Kubernetes cluster. These drivers communicate with kube-controller-manager storage controllers via the Kubernetes API, and with kubelet through a local gRPC endpoint. Now there are over 100 CSI drivers available across all major cloud and storage vendors, making stateful workloads in Kubernetes a reality.

However, a major challenge remained on how to handle all the existing users of in-tree volume APIs. To retain API backwards compatibility, we built an API translation layer into our controllers that will convert the in-tree volume API into the equivalent CSI API. This allowed us to redirect all storage operations to the CSI driver, paving the way for us to remove the code for the built-in volume plugins without removing the API.

You can learn more about In-tree Storage migration in Kubernetes In-Tree to CSI Volume Migration Moves to Beta.
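A quick, hedged way to observe the outcome of this migration on a running cluster (driver and provisioner names vary by vendor; the AWS names below are just one example):

# List the CSI drivers registered in the cluster and the nodes they serve
kubectl get csidrivers
kubectl get csinodes

# StorageClasses now reference CSI provisioners (e.g. ebs.csi.aws.com)
# rather than legacy in-tree names such as kubernetes.io/aws-ebs
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner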

What's next?

This migration has been the primary focus for SIG Cloud Provider over the past few years. With this significant milestone achieved, we will be shifting our efforts towards exploring new and innovative ways for Kubernetes to better integrate with cloud providers, leveraging the external subsystems we've built over the years. This includes making Kubernetes smarter in hybrid environments where nodes in the cluster can run on both public and private clouds, as well as providing better tools and frameworks for developers of external providers to simplify and streamline their integration efforts.

With all the new features, tools, and frameworks being planned, SIG Cloud Provider is not forgetting about the other side of the equation: testing. Another area of focus for the SIG's future activities is improving cloud controller testing to include more providers. The ultimate goal of this effort is to create a testing framework that covers as many providers as possible, giving the Kubernetes community the highest levels of confidence about their Kubernetes environments.

If you're using a version of Kubernetes older than v1.29 and haven't migrated to an external cloud provider yet, we recommend checking out our previous blog post Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components. It provides detailed information on the changes we've made and offers guidance on how to migrate to an external provider. Starting in v1.31, in-tree cloud providers will be permanently disabled and removed from core Kubernetes components.

If you’re interested in contributing, come join our bi-weekly SIG meetings!

via Kubernetes Blog https://kubernetes.io/

May 19, 2024 at 08:00PM

·kubernetes.io·
Completing the largest migration in Kubernetes history
Installing Bluefin onto a Framework Laptop 16 - Framework Laptops - Universal Blue
Installing Bluefin onto a Framework Laptop 16 - Framework Laptops - Universal Blue
Download the Framework Laptop image of Project Bluefin. Make sure you select Intel or AMD depending on the mainboard in your device: Create a USB stick using Fedora Media Writer (Windows or Mac or Linux) Insert your USB drive (8GB or larger). Note that it will be reformatted, so make sure you are ok with erasing any data that is on it. After installing Fedora Media Writer, run it. Choose Select .iso file, browse to bluefin-gts.iso and select it. Click the Write button. Once the USB drive...
·universal-blue.discourse.group·
Installing Bluefin onto a Framework Laptop 16 - Framework Laptops - Universal Blue