1_r/devopsish

Starting an Open Source Project
Starting an Open Source Project

Starting an Open Source Project

Open Source Guides Starting an Open Source Project Once a company has participated in open source communities long enough to build a reputation, it’s in a…

May 1, 2024 at 10:06AM

via Instapaper

·linuxfoundation.org·
Starting an Open Source Project
Software vendors dump open source, go for the cash grab
Software vendors dump open source, go for the cash grab

Software vendors dump open source, go for the cash grab

Essentially, all software is built using open source. By Synopsys’ count, 96% of all codebases contain open-source software. Lately, though, there’s been a very…

May 1, 2024 at 09:42AM

via Instapaper

·computerworld.com·
Software vendors dump open source, go for the cash grab
The end of vendor-backed open source?
The end of vendor-backed open source?

The end of vendor-backed open source?

Software vendors dump open source, go for the cash grab

May 1, 2024 at 09:42AM

via Instapaper

·infoworld.com·
The end of vendor-backed open source?
Container Runtime Interface streaming explained
Container Runtime Interface streaming explained

Container Runtime Interface streaming explained

https://kubernetes.io/blog/2024/05/01/cri-streaming-explained/

The Kubernetes Container Runtime Interface (CRI) acts as the main connection between the kubelet and the Container Runtime. Those runtimes have to provide a gRPC server which fulfills a Kubernetes-defined Protocol Buffer interface. This API definition evolves over time, for example when contributors add new features or when fields become deprecated.

In this blog post, I'd like to dive into the functionality and history of three extraordinary Remote Procedure Calls (RPCs), which are truly outstanding in terms of how they work: Exec, Attach and PortForward.

Exec can be used to run dedicated commands within the container and stream the output to a client like kubectl or crictl. It also allows interaction with that process using standard input (stdin), for example if users want to run a new shell instance within an existing workload.

Attach streams the output of the currently running process via standard I/O from the container to the client and also allows interaction with it. This is particularly useful if users want to see what is going on in the container and be able to interact with the process.

PortForward can be utilized to forward a port from the host to the container to be able to interact with it using third party network tools. This makes it possible to bypass Kubernetes services for a certain workload and interact directly with its network interface.

What is so special about them?

All RPCs of the CRI either use gRPC unary calls for communication or the server side streaming feature (only GetContainerEvents right now). This means that nearly all RPCs receive a single client request and have to return a single server response. The same applies to Exec, Attach, and PortForward, where their protocol definition looks like this:

// Exec prepares a streaming endpoint to execute a command in the container.
rpc Exec(ExecRequest) returns (ExecResponse) {}

// Attach prepares a streaming endpoint to attach to a running container.
rpc Attach(AttachRequest) returns (AttachResponse) {}

// PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
rpc PortForward(PortForwardRequest) returns (PortForwardResponse) {}

The requests carry everything required to allow the server to do the work, for example, the ContainerId or command (Cmd) to be run in case of Exec. More interestingly, all of their responses only contain a url:

message ExecResponse {
    // Fully qualified URL of the exec streaming server.
    string url = 1;
}

message AttachResponse {
    // Fully qualified URL of the attach streaming server.
    string url = 1;
}

message PortForwardResponse {
    // Fully qualified URL of the port-forward streaming server.
    string url = 1;
}

Why is it implemented like that? Well, the original design document for those RPCs even predates Kubernetes Enhancement Proposals (KEPs) and was originally outlined back in 2016. The kubelet had a native implementation for Exec, Attach, and PortForward before the initiative to bring the functionality to the CRI started. Before that, everything was bound to Docker or the later-abandoned container runtime rkt.

The CRI-related design document also elaborates on the option to use native RPC streaming for exec, attach, and port forward. The downsides outweighed the benefits of this approach: the kubelet would still create a network bottleneck, and future runtimes would not be free to choose their server implementation details. Another option, in which the kubelet implements a portable, runtime-agnostic solution, was also rejected in favor of the final design, because it would have meant maintaining yet another project which would nevertheless be runtime dependent.

This means that the basic flow for Exec, Attach, and PortForward was proposed to look like this:

sequenceDiagram
    participant crictl
    participant kubectl
    participant API as API Server
    participant kubelet
    participant runtime as Container Runtime
    participant streaming as Streaming Server
    alt Client alternatives
        Note over kubelet,runtime: Container Runtime Interface (CRI)
        kubectl->>API: exec, attach, port-forward
        API->>kubelet: 
        kubelet->>runtime: Exec, Attach, PortForward
    else
        Note over crictl,runtime: Container Runtime Interface (CRI)
        crictl->>runtime: Exec, Attach, PortForward
    end
    runtime->>streaming: New Session
    streaming->>runtime: HTTP endpoint (URL)
    alt Client alternatives
        runtime->>kubelet: Response URL
        kubelet->>API: 
        API-->>streaming: Connection upgrade (SPDY or WebSocket)
        streaming-)API: Stream data
        API-)kubectl: Stream data
    else
        runtime->>crictl: Response URL
        crictl-->>streaming: Connection upgrade (SPDY or WebSocket)
        streaming-)crictl: Stream data
    end

Clients like crictl or the kubelet (via kubectl) request a new exec, attach or port forward session from the runtime using the gRPC interface. The runtime implements a streaming server that also manages the active sessions. This streaming server provides an HTTP endpoint for the client to connect to. The client upgrades the connection to use the SPDY streaming protocol or (in the future) to a WebSocket connection and starts to stream the data back and forth.

This implementation allows runtimes to have the flexibility to implement Exec, Attach and PortForward the way they want, and also allows a simple test path. Runtimes can change the underlying implementation to support any kind of feature without having a need to modify the CRI at all.

Many smaller enhancements to this overall approach have been merged into Kubernetes over the past years, but the general pattern has always stayed the same. The kubelet source code was transformed into a reusable library, which is nowadays usable from container runtimes to implement the basic streaming capability.

How does the streaming actually work?

At first glance, it looks like all three RPCs work the same way, but that's not the case. It's possible to group the functionality of Exec and Attach, while PortForward follows a distinct internal protocol definition.

Exec and Attach

Kubernetes defines Exec and Attach as remote commands, where its protocol definition exists in five different versions:

| # | Version | Note |
|---|---------|------|
| 1 | channel.k8s.io | Initial (unversioned) SPDY sub protocol (#13394, #13395) |
| 2 | v2.channel.k8s.io | Resolves the issues present in the first version (#15961) |
| 3 | v3.channel.k8s.io | Adds support for resizing container terminals (#25273) |
| 4 | v4.channel.k8s.io | Adds support for exit codes using JSON errors (#26541) |
| 5 | v5.channel.k8s.io | Adds support for a CLOSE signal (#119157) |

On top of that, there is an overall effort to replace the SPDY transport protocol with WebSockets as part of KEP #4006. Runtimes have to satisfy those protocols over their life cycle to stay up to date with the Kubernetes implementation.

Let's assume that a client uses the latest (v5) version of the protocol and communicates over WebSockets. In that case, the general flow would be:

1. The client requests a URL endpoint for Exec or Attach using the CRI.

2. The server (runtime) validates the request, inserts it into a connection tracking cache, and provides the HTTP endpoint URL for that request.

3. The client connects to that URL, upgrades the connection to establish a WebSocket, and starts to stream data.

- In the case of Attach, the server has to stream the main container process output to the client.

- In the case of Exec, the server has to create the subprocess command within the container and then stream the output to the client.

- If stdin is required, the server needs to listen for that as well and redirect it to the corresponding process.

Interpreting data for the defined protocol is fairly simple: the first byte of every input and output packet defines the actual stream:

| First Byte | Type | Description |
|---|---|---|
| 0 | standard input | Data streamed from stdin |
| 1 | standard output | Data streamed to stdout |
| 2 | standard error | Data streamed to stderr |
| 3 | stream error | A streaming error occurred |
| 4 | stream resize | A terminal resize event |
| 255 | stream close | Stream should be closed (for WebSockets) |

How should runtimes now implement the streaming server methods for Exec and Attach using the provided kubelet library? The key is that the streaming server implementation in the kubelet defines an interface called Runtime, which has to be fulfilled by the actual container runtime if it wants to use that library:

// Runtime is the interface to execute the commands and provide the streams.
type Runtime interface {
	Exec(ctx context.Context, containerID string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error
	Attach(ctx context.Context, containerID string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error
	PortForward(ctx context.Context, podSandboxID string, port int32, stream io.ReadWriteCloser) error
}

Everything related to the protocol interpretation is already in place, and runtimes only have to implement the actual Exec and Attach logic. For example, the container runtime CRI-O does it like this (pseudocode):

func (s StreamService) Exec(
	ctx context.Context,
	containerID string,
	cmd []string,
	stdin io.Reader,
	stdout, stderr io.WriteCloser,
	tty bool,
	resizeChan <-chan remotecommand.TerminalSize,
) error {
	// Retrieve the container by the provided containerID
	// …

	// Update the container status and verify that the workload is running
	// …

	// Execute the command and stream the data
	return s.runtimeServer.Runtime().ExecContainer(
		s.ctx, c, cmd, stdin, stdout, stderr, tty, resizeChan,
	)
}

PortForward

Forwarding ports to a container works a bit differently from streaming I/O data from a workload. The server still has to provide a URL endpoint for the client to connect to, but then the container runtime has to enter the network namespace of the container, allocate the port, and stream the data back and forth.

·kubernetes.io·
Container Runtime Interface streaming explained
Marp: Markdown Presentation Ecosystem
Marp: Markdown Presentation Ecosystem

Marp: Markdown Presentation Ecosystem

Marp: Markdown Presentation Ecosystem Find Marp tools on GitHub! Create beautiful slide decks using an intuitive Markdown experience Marp (also known as the…

April 30, 2024 at 02:43PM

via Instapaper

·marp.app·
Marp: Markdown Presentation Ecosystem
I’m a sucker for cheat sheets | LLM Cheatsheet: Top 15 LLM Terms You Need to Know in 2024 — The Cloud Girl
I’m a sucker for cheat sheets | LLM Cheatsheet: Top 15 LLM Terms You Need to Know in 2024 — The Cloud Girl
Large Language Models (LLMs) are revolutionizing the way we interact with technology. But with all this innovation comes a new vocabulary! Fear not, fellow AI enthusiasts, for this blog is your decoder ring to the fascinating world of LLM lingo. Let's dive into some essential terms:
·thecloudgirl.dev·
I’m a sucker for cheat sheets | LLM Cheatsheet: Top 15 LLM Terms You Need to Know in 2024 — The Cloud Girl
The Verge hires Robison to cover artificial intelligence - Talking Biz News
The Verge hires Robison to cover artificial intelligence - Talking Biz News

The Verge hires Robison to cover artificial intelligence - Talking Biz News

Kylie Robison Kylie Robison is joining as senior AI reporter, where she’ll lead the technology publication’s coverage of artificial intelligence. She will start…

April 30, 2024 at 01:28PM

via Instapaper

·talkingbiznews.com·
The Verge hires Robison to cover artificial intelligence - Talking Biz News
How an empty S3 bucket can make your AWS bill explode
How an empty S3 bucket can make your AWS bill explode

How an empty S3 bucket can make your AWS bill explode

A few weeks ago, I began working on the PoC of a document indexing system for my client. I created a single S3 bucket in the eu-west-1 region and uploaded some…

April 30, 2024 at 10:31AM

via Instapaper

·medium.com·
How an empty S3 bucket can make your AWS bill explode
The hyper-clouds are open source's friends
The hyper-clouds are open source's friends

The hyper-clouds are open source's friends

Opinion One of the knee-jerk arguments made by companies abandoning their open source roots is that they can't make money because the bad hyper-cloud companies…

April 30, 2024 at 10:16AM

via Instapaper

·theregister.com·
The hyper-clouds are open source's friends
Atmosphere Verified Operating System
Atmosphere Verified Operating System
Atmosphere is a full-featured microkernel developed in Rust and verified with Verus. Conceptually Atmosphere is similar to the line of L4 microkernels. Atmosphere pushes most kernel functionality to user-space, e.g., device drivers, network stack, file systems, etc. The microkernel supports a minimal set of mechanisms to implement address spaces, page-tables, coarse-grained memory management, and threads of execution that together with address spaces implement an abstraction of a process. Each process has a page table and a collection of schedulable threads.
·mars-research.github.io·
Atmosphere Verified Operating System
Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA
Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA

Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA

https://kubernetes.io/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/

With the release of Kubernetes 1.30, the feature to prevent the modification of the volume mode of a PersistentVolumeClaim that was created from an existing VolumeSnapshot in a Kubernetes cluster has moved to GA!

The problem

The Volume Mode of a PersistentVolumeClaim refers to whether the underlying volume on the storage device is formatted into a filesystem or presented as a raw block device to the Pod that uses it.

Users can leverage the VolumeSnapshot feature, which has been stable since Kubernetes v1.20, to create a PersistentVolumeClaim (shortened as PVC) from an existing VolumeSnapshot in the Kubernetes cluster. The PVC spec includes a dataSource field, which can point to an existing VolumeSnapshot instance. Visit Create a PersistentVolumeClaim from a Volume Snapshot for more details on how to create a PVC from an existing VolumeSnapshot in a Kubernetes cluster.

When leveraging the above capability, there is no logic that validates whether the mode of the original volume, whose snapshot was taken, matches the mode of the newly created volume.

This presents a security gap that allows malicious users to potentially exploit an as-yet-unknown vulnerability in the host operating system.

There is a valid use case for allowing some users to perform such conversions: storage backup vendors typically convert the volume mode during a backup operation so they can retrieve changed blocks more efficiently. This legitimate need prevents Kubernetes from blocking the operation outright and presents the challenge of distinguishing trusted users from malicious ones.

Preventing unauthorized users from converting the volume mode

In this context, an authorized user is one who has access rights to perform update or patch operations on VolumeSnapshotContents, which is a cluster-level resource.

It is up to the cluster administrator to provide these rights only to trusted users or applications, like backup vendors. Users apart from such authorized ones will never be allowed to modify the volume mode of a PVC when it is being created from a VolumeSnapshot.

To convert the volume mode, an authorized user must do the following:

Identify the VolumeSnapshot that is to be used as the data source for a newly created PVC in the given namespace.

Identify the VolumeSnapshotContent bound to the above VolumeSnapshot.

kubectl describe volumesnapshot -n <namespace> <name>

Add the annotation snapshot.storage.kubernetes.io/allow-volume-mode-change: "true" to the above VolumeSnapshotContent. The VolumeSnapshotContent annotations must include one similar to the following manifest fragment:

kind: VolumeSnapshotContent
metadata:
  annotations:
    snapshot.storage.kubernetes.io/allow-volume-mode-change: "true"
...

Note: For pre-provisioned VolumeSnapshotContents, you must take the extra step of setting the spec.sourceVolumeMode field to either Filesystem or Block, depending on the mode of the volume from which this snapshot was taken.

An example is shown below:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  annotations:
    snapshot.storage.kubernetes.io/allow-volume-mode-change: "true"
  name: <volume-snapshot-content-name>
spec:
  deletionPolicy: Delete
  driver: hostpath.csi.k8s.io
  source:
    snapshotHandle: <snapshot-handle>
    sourceVolumeMode: Filesystem
  volumeSnapshotRef:
    name: <volume-snapshot-name>
    namespace: <namespace>

Repeat steps 1 to 3 for all VolumeSnapshotContents whose volume mode needs to be converted during a backup or restore operation. This can be done either via software with credentials of an authorized user or manually by the authorized user(s).

If the annotation shown above is present on a VolumeSnapshotContent object, Kubernetes will not prevent the volume mode from being converted. Users should keep this in mind before they attempt to add the annotation to any VolumeSnapshotContent.

Action required

The prevent-volume-mode-conversion feature flag is enabled by default in the external-provisioner v4.0.0 and external-snapshotter v7.0.0. Volume mode change will be rejected when creating a PVC from a VolumeSnapshot unless the steps described above have been performed.

What's next

To determine which CSI external sidecar versions support this feature, please head over to the CSI docs page. For any queries or issues, join Kubernetes on Slack and create a thread in the #csi or #sig-storage channel. Alternatively, create an issue in the CSI external-snapshotter repository.

via Kubernetes Blog https://kubernetes.io/

April 29, 2024 at 08:00PM

·kubernetes.io·
Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA
Earth Formation Site
Earth Formation Site
It's not far from the sign marking the exact latitude and longitude of the Earth's core.
·xkcd.com·
Earth Formation Site
Cultivating a culture of lifelong learning
Cultivating a culture of lifelong learning
In a world where new technologies, market trends, and ways of working emerge at a rapid pace, organizations that prioritize lifelong learning are better positioned to navigate them successfully.
·chieflearningofficer.com·
Cultivating a culture of lifelong learning
How Platform Engineering Compares to Running a Restaurant
How Platform Engineering Compares to Running a Restaurant

How Platform Engineering Compares to Running a Restaurant

Dive into the fascinating world of platform engineering while we draw parallels between the complex operations of a bustling eatery and the intricate processes of platform engineering. Just as a successful restaurant relies on a harmonious blend of ingredients, staff, and ambiance to delight customers, platform engineering integrates various technologies, teams, and practices to deliver robust software solutions. Join us as we explore the similarities in skill sets in both fields.

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: DoubleCloud 🔗 https://double.cloud 🔗 Save time & costs by streamlining data pipelines with zero-maintenance open-source solutions. From ingestion to visualization: all integrated, fully managed, and highly reliable, so your engineers will love working with data. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

#PlatformEngineering #InternalDeveloperPlatform #IDP

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendar.app.google/Q9eaDUHN8ibWBaA7A to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ Twitter: https://twitter.com/vfarcic ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Platform Engineering vs. Restaurant 01:54 DoubleCloud (sponsor) 02:54 Platform Engineering vs. Restaurant (cont.)

via YouTube https://www.youtube.com/watch?v=vHQtWrqrFho

·youtube.com·
How Platform Engineering Compares to Running a Restaurant
tchx84/Flatseal: Manage Flatpak permissions
tchx84/Flatseal: Manage Flatpak permissions
Manage Flatpak permissions. Contribute to tchx84/Flatseal development by creating an account on GitHub.
·github.com·
tchx84/Flatseal: Manage Flatpak permissions
How Burnout Became Normal — and How to Push Back Against It
How Burnout Became Normal — and How to Push Back Against It
Slowly but steadily, while we’ve been preoccupied with trying to meet demands that outstrip our resources, grappling with unfair treatment, or watching our working hours encroach upon our downtime, burnout has become the new baseline in many work environments. From the 40% of Gen Z workers who believe burnout is an inevitable part of success, to executives who believe high-pressure, “trial-by-fire” assignments are a required rite of passage, to toxic hustle culture that pushes busyness as a badge of honor, too many of us now expect to feel overwhelmed, over-stressed, and eventually burned out at work. When pressures are mounting and your work environment continues to be stressful, it’s all the more important to take proactive steps to return to your personal sweet spot of stress and remain there as long as you can. The author presents several strategies.
·hbr.org·
How Burnout Became Normal — and How to Push Back Against It
Project Bluefin Tour on Framework Laptop
Project Bluefin Tour on Framework Laptop
Project Bluefin is a new Linux distribution designed for reliability, performance, and sustainability. Bluefin is built with the Cloud Native Desktop model. ...
·youtube.com·
Project Bluefin Tour on Framework Laptop