1_r/devopsish

Five Key Requirements for a Successful OSPO - Nithya Ruff, Amazon

In the past 2 years we have seen some major OSPOs shrink and some have even gone away. We are…

May 3, 2024 at 02:45PM

via Instapaper

·youtube.com·
What's a CNCF Ambassador? - Containerized Adventures

This post is written based on my own experiences and opinions as a CNCF Ambassador. The CNCF Ambassador program application is open now (January 25, 2024 –…

May 3, 2024 at 11:39AM

via Instapaper

·kaslin.rocks·
Orange-OpenSource/hurl: Hurl, run and test HTTP requests with plain text.

What's Hurl? Hurl is a command line tool that runs HTTP requests defined in a simple plain text format. It can chain requests, capture values and evaluate…
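To give a feel for the format, here is a hedged sketch of a Hurl file chaining two requests; the endpoints, JSON fields, and expected values are invented for illustration:

# Log in and capture a value from the JSON response.
POST https://example.org/api/login
{
  "user": "bob",
  "password": "secret"
}

HTTP 200
[Captures]
token: jsonpath "$.token"

# Chain a second request that reuses the captured token, then assert on the body.
GET https://example.org/api/orders
Authorization: Bearer {{token}}

HTTP 200
[Asserts]
jsonpath "$.orders" count >= 1

Running hurl --test against such a file executes the requests in order and reports any failed assertions.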

May 3, 2024 at 09:42AM

via Instapaper

·github.com·
Dokploy/dokploy: Open Source Alternative to Vercel, Netlify and Heroku.

May 3, 2024 at 09:41AM

via Instapaper

·github.com·
Fedora CoreOS Numbers 05/2024 edition - Fedora Discussion
Fedora publishes countme data weekly into a database that can be picked up from here. For the week of 2024-04-28 data it looks like our FCOS node count has increased to just under 70k (20.8k transient, 48.6k long running): Our architecture breakdown is about 30.5% aarch64 and 69.4% x86_64. Our breakdown based on Fedora Linux release shows the majority of users still on Fedora 39, which is expected since our stable release hasn’t switched over yet to Fedora 40, but will next week. We can...
·discussion.fedoraproject.org·
Releases Distribution Changes - OpenSSL Blog

I’d like to give you a heads-up about some changes we’re making at OpenSSL. We’re simplifying how you can get our software, and that means we’re phasing out…

May 2, 2024 at 06:57AM

via Instapaper

·openssl.org·
Week Ending April 28, 2024

https://lwkd.info/2024/20240429

Developer News

We have two new Working Groups, built around the needs of new workloads like AI/ML:

WG Device Management will develop tooling and infrastructure to help users add accelerators and other specialized hardware to their Kubernetes clusters

WG Serving will enable AI/ML inference workloads that are not batch-oriented (as a complement to WG Batch)

SIG-Docs is having almost total leadership turnover with old leaders stepping down, new ones stepping up, and some folks swapping roles.

SIG-Architecture has published new guidance for when a feature can skip Alpha release.

Reminder: SIG Annual Reports are due by May 1. It’s mostly automated now, so please get it done. Any contributor to the SIG can work on the report, not just the Leads.

Release Schedule

Next Deadline: v1.31 cycle starts, May 13th, 2024

We’re in the period between releases. Shadow applications for the v1.31 release team are open until May 15. The tentative dates for the v1.31 cycle are from May 13th to August 15th, 2024.

KEP of the Week

4138: Pod Conditions for Starting and Completion of Sandbox Creation

The KEP adds a pod condition called PodReadyToStartContainers. It shows pod readiness to start containers immediately after pod sandbox creation. It provides a clear indication to cluster administrators when the initialization phase of successfully scheduled pods is completed. Existing conditions such as PodScheduled and Initialized do not adequately convey this specific phase of pod lifecycle. With this Enhancement, users can monitor and analyze pod sandbox creation latency metrics. This can assist in setting Service Level Objectives (SLOs) and can be used by custom controllers and operators to optimize reconciliation strategies for sandbox creation failures.

This KEP is tracked to promote to beta in the v1.30 release.
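As a rough sketch of how this condition could be consumed (not part of the KEP itself; the pod name, namespace, and kubeconfig handling below are placeholder assumptions), a client-go program might estimate sandbox creation latency like this:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// sandboxReadyTime returns when the PodReadyToStartContainers condition last
// became True; compared with the pod's creation time, this approximates
// sandbox creation latency.
func sandboxReadyTime(pod *corev1.Pod) (metav1.Time, bool) {
	for _, cond := range pod.Status.Conditions {
		if string(cond.Type) == "PodReadyToStartContainers" && cond.Status == corev1.ConditionTrue {
			return cond.LastTransitionTime, true
		}
	}
	return metav1.Time{}, false
}

func main() {
	// Load the local kubeconfig (placeholder setup, no flag handling).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "default" and "my-pod" are hypothetical values.
	pod, err := client.CoreV1().Pods("default").Get(context.Background(), "my-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	if t, ok := sandboxReadyTime(pod); ok {
		fmt.Printf("sandbox ready %s after pod creation\n", t.Sub(pod.CreationTimestamp.Time))
	}
}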

Other Merges

Validate common name formats in CEL

client-go’s REST client gets WatchList access

Prevent a race condition in the transforming informer, including resync; backported

--hostname-override works correctly with external cloud providers

Add a function to check etcd supported features

Reorganize kube-proxy metrics (“and stuff”), giving nftables mode its own metrics

kubeadm: remember to download the config during upgrade, use output/v1alpha3 for printing

Remove cloudprovider code from volume managers

Kubemark supports burst and qps tests

New metrics: not-really-invalid packets

Contextual logging: component-helpers

Test Cleanup: TrafficDistribution, watch cache

Deprecated

remove deprecated output.kubeadm.k8s.io/v1alpha2 API

enable-client-cert-rotation is the new experimental-cert-rotation

remove deprecated DefaultHostNetworkHostPortsInPodTemplates feature gate

Remove pre-1.20 checkpoint support from DeviceManager

Version Updates

sigs.k8s.io/yaml to 1.4.0

cri-tools to 1.30.0

cel-go to 0.20.1, changes optional to optional_type

Subprojects and Dependency Updates

Kernel Module Management 2.1.0: GC delay, reorder kmod loading.

kubernetes-sigs/kubebuilder v3.14.1: Upgrades to controller runtime, bug fixes.

kubernetes/kompose v1.33.0: Ability to select stage in multistage dockerfile, labels for initContainers, networkmode service.

kubernetes/cloud-provider-openstack openstack-cinder-csi-2.29.1.

etcd-io/etcd v3.4.32: Fix to LeaseTimeToLive returning error, updates to compaction log.

containerd/containerd v1.7.16: HPC port forwarding, updates to HTTP fallback to better account for TLS timeout.

cri-o/cri-o: Update pinned images list on config reload, keep track of exec calls for container.

grpc/grpc v1.63.0: API to inject connected endpoints into servers, upgrades to Protobuf.

via Last Week in Kubernetes Development https://lwkd.info/

April 29, 2024 at 11:48AM

·lwkd.info·
Carbon
Carbon is the easiest way to create and share beautiful images of your source code.
·carbon.now.sh·
Tools to make your presentation shine

Diagrams: excalidraw.com. Code "screenshots": Carbon, the easiest way to create and share beautiful images of your source code. Code "demos": Make Your CLI Demos a Breeze with Zero Stress and Zero Mistakes; running live demos can be stressful. You know what you want to say and…
·anaisurl.com·
Starting an Open Source Project

Once a company has participated in open source communities long enough to build a reputation, it's in a…

May 1, 2024 at 10:06AM

via Instapaper

·linuxfoundation.org·
Software vendors dump open source, go for the cash grab

Essentially, all software is built using open source. By Synopsys’ count, 96% of all codebases contain open-source software. Lately, though, there’s been a very…

May 1, 2024 at 09:42AM

via Instapaper

·computerworld.com·
The end of vendor-backed open source?

Software vendors dump open source, go for the cash grab

May 1, 2024 at 09:42AM

via Instapaper

·infoworld.com·
Container Runtime Interface streaming explained

https://kubernetes.io/blog/2024/05/01/cri-streaming-explained/

The Kubernetes Container Runtime Interface (CRI) acts as the main connection between the kubelet and the Container Runtime. Those runtimes have to provide a gRPC server that fulfills a Kubernetes defined Protocol Buffer interface. This API definition evolves over time, for example when contributors add new features or deprecate existing fields.

In this blog post, I'd like to dive into the functionality and history of three extraordinary Remote Procedure Calls (RPCs), which are truly outstanding in terms of how they work: Exec, Attach and PortForward.

Exec can be used to run dedicated commands within the container and stream the output to a client like kubectl or crictl. It also allows interaction with that process using standard input (stdin), for example if users want to run a new shell instance within an existing workload.

Attach streams the output of the currently running process via standard I/O from the container to the client and also allows interaction with them. This is particularly useful if users want to see what is going on in the container and be able to interact with the process.

PortForward can be utilized to forward a port from the host to the container to be able to interact with it using third party network tools. This allows it to bypass Kubernetes services for a certain workload and interact with its network interface.

What is so special about them?

All RPCs of the CRI either use gRPC unary calls for communication or the server-side streaming feature (only GetContainerEvents right now). This means that nearly all RPCs receive a single client request and have to return a single server response. The same applies to Exec, Attach, and PortForward, where their protocol definition looks like this:

// Exec prepares a streaming endpoint to execute a command in the container.
rpc Exec(ExecRequest) returns (ExecResponse) {}

// Attach prepares a streaming endpoint to attach to a running container.
rpc Attach(AttachRequest) returns (AttachResponse) {}

// PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
rpc PortForward(PortForwardRequest) returns (PortForwardResponse) {}

The requests carry everything required to allow the server to do the work, for example, the ContainerId or command (Cmd) to be run in case of Exec. More interestingly, all of their responses only contain a url:

message ExecResponse {
    // Fully qualified URL of the exec streaming server.
    string url = 1;
}

message AttachResponse {
    // Fully qualified URL of the attach streaming server.
    string url = 1;
}

message PortForwardResponse {
    // Fully qualified URL of the port-forward streaming server.
    string url = 1;
}

Why is it implemented like that? Well, the original design document for those RPCs even predates Kubernetes Enhancement Proposals (KEPs) and was originally outlined back in 2016. The kubelet had a native implementation for Exec, Attach, and PortForward before the initiative to bring the functionality to the CRI started. Before that, everything was bound to Docker or the later abandoned container runtime rkt.

The CRI related design document also elaborates on the option to use native RPC streaming for exec, attach, and port forward. The downsides outweighed that approach: the kubelet would still create a network bottleneck, and future runtimes would not be free in choosing the server implementation details. Another option, having the kubelet implement a portable, runtime-agnostic solution, was abandoned in favor of the final one, because it would mean maintaining yet another project which would nevertheless be runtime dependent.

This means that the basic flow for Exec, Attach and PortForward was proposed to look like this:

sequenceDiagram
    participant crictl
    participant kubectl
    participant API as API Server
    participant kubelet
    participant runtime as Container Runtime
    participant streaming as Streaming Server
    alt Client alternatives
        Note over kubelet,runtime: Container Runtime Interface (CRI)
        kubectl->>API: exec, attach, port-forward
        API->>kubelet:
        kubelet->>runtime: Exec, Attach, PortForward
    else
        Note over crictl,runtime: Container Runtime Interface (CRI)
        crictl->>runtime: Exec, Attach, PortForward
    end
    runtime->>streaming: New Session
    streaming->>runtime: HTTP endpoint (URL)
    alt Client alternatives
        runtime->>kubelet: Response URL
        kubelet->>API:
        API-->>streaming: Connection upgrade (SPDY or WebSocket)
        streaming-)API: Stream data
        API-)kubectl: Stream data
    else
        runtime->>crictl: Response URL
        crictl-->>streaming: Connection upgrade (SPDY or WebSocket)
        streaming-)crictl: Stream data
    end

Clients like crictl or the kubelet (via kubectl) request a new exec, attach or port forward session from the runtime using the gRPC interface. The runtime implements a streaming server that also manages the active sessions. This streaming server provides an HTTP endpoint for the client to connect to. The client upgrades the connection to use the SPDY streaming protocol or (in the future) to a WebSocket connection and starts to stream the data back and forth.
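As a hedged sketch of that client-side flow (the containerd socket path and container ID are placeholder assumptions, error handling is minimal, and this is not the kubelet's actual code), a Go program using the CRI API and client-go might look like this:

package main

import (
	"context"
	"net/url"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to the runtime's CRI socket (the containerd path is an assumption).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Step 1: request a streaming session; the response only carries a URL.
	resp, err := client.Exec(context.Background(), &runtimeapi.ExecRequest{
		ContainerId: "<container-id>", // placeholder
		Cmd:         []string{"echo", "hello"},
		Stdout:      true,
	})
	if err != nil {
		panic(err)
	}

	// Step 2: connect to the returned URL and upgrade the connection (SPDY here).
	u, err := url.Parse(resp.Url)
	if err != nil {
		panic(err)
	}
	exec, err := remotecommand.NewSPDYExecutor(&rest.Config{}, "POST", u)
	if err != nil {
		panic(err)
	}

	// Step 3: stream the process output back to our terminal.
	if err := exec.StreamWithContext(context.Background(), remotecommand.StreamOptions{
		Stdout: os.Stdout,
	}); err != nil {
		panic(err)
	}
}

This roughly mirrors what crictl does when talking directly to a runtime.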

This implementation allows runtimes to have the flexibility to implement Exec, Attach and PortForward the way they want, and also allows a simple test path. Runtimes can change the underlying implementation to support any kind of feature without having a need to modify the CRI at all.

Many smaller enhancements to this overall approach have been merged into Kubernetes in the past years, but the general pattern has always stayed the same. The kubelet source code transformed into a reusable library, which is nowadays usable from container runtimes to implement the basic streaming capability.

How does the streaming actually work?

At a first glance, it looks like all three RPCs work the same way, but that's not the case. It's possible to group the functionality of Exec and Attach, while PortForward follows a distinct internal protocol definition.

Exec and Attach

Kubernetes defines Exec and Attach as remote commands, where its protocol definition exists in five different versions:

#   Version              Note
1   channel.k8s.io       Initial (unversioned) SPDY sub protocol (#13394, #13395)
2   v2.channel.k8s.io    Resolves the issues present in the first version (#15961)
3   v3.channel.k8s.io    Adds support for resizing container terminals (#25273)
4   v4.channel.k8s.io    Adds support for exit codes using JSON errors (#26541)
5   v5.channel.k8s.io    Adds support for a CLOSE signal (#119157)

On top of that, there is an overall effort to replace the SPDY transport protocol with WebSockets as part of KEP #4006. Runtimes have to satisfy those protocols over their life cycle to stay up to date with the Kubernetes implementation.

Let's assume that a client uses the latest (v5) version of the protocol as well as communicating over WebSockets. In that case, the general flow would be:

1. The client requests a URL endpoint for Exec or Attach using the CRI.

2. The server (runtime) validates the request, inserts it into a connection tracking cache, and provides the HTTP endpoint URL for that request.

3. The client connects to that URL, upgrades the connection to establish a WebSocket, and starts to stream data.

In the case of Attach, the server has to stream the main container process data to the client. In the case of Exec, the server has to create the subprocess command within the container and then stream the output to the client. If stdin is required, the server needs to listen for that as well and redirect it to the corresponding process.

Interpreting data for the defined protocol is fairly simple: The first byte of every input and output packet defines the actual stream:

First Byte   Type             Description
0            standard input   Data streamed from stdin
1            standard output  Data streamed to stdout
2            standard error   Data streamed to stderr
3            stream error     A streaming error occurred
4            stream resize    A terminal resize event
255          stream close     Stream should be closed (for WebSockets)
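A hedged Go sketch of the client-side demultiplexing this table implies; the message channel stands in for frames arriving over a connected WebSocket or SPDY stream, which is an assumption of this sketch:

package main

import (
	"fmt"
	"io"
	"os"
)

// demultiplex routes framed messages to their logical streams: the first
// byte selects the stream per the table above, the remainder is payload.
func demultiplex(msgs <-chan []byte, stdout, stderr io.Writer) error {
	for msg := range msgs {
		if len(msg) == 0 {
			continue
		}
		payload := msg[1:]
		switch msg[0] {
		case 1: // standard output
			if _, err := stdout.Write(payload); err != nil {
				return err
			}
		case 2: // standard error
			if _, err := stderr.Write(payload); err != nil {
				return err
			}
		case 3: // stream error reported by the server
			return fmt.Errorf("remote stream error: %s", payload)
		case 255: // stream close (WebSockets)
			return nil
		}
	}
	return nil
}

func main() {
	// Feed two fabricated frames to show the framing; real data comes off the wire.
	msgs := make(chan []byte, 2)
	msgs <- append([]byte{1}, []byte("hello from stdout\n")...)
	msgs <- []byte{255}
	close(msgs)
	if err := demultiplex(msgs, os.Stdout, os.Stderr); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}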

How should runtimes now implement the streaming server methods for Exec and Attach by using the provided kubelet library? The key is that the streaming server implementation in the kubelet outlines an interface called Runtime which has to be fulfilled by the actual container runtime if it wants to use that library:

// Runtime is the interface to execute the commands and provide the streams.
type Runtime interface {
    Exec(ctx context.Context, containerID string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error
    Attach(ctx context.Context, containerID string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error
    PortForward(ctx context.Context, podSandboxID string, port int32, stream io.ReadWriteCloser) error
}

Everything related to the protocol interpretation is already in place and runtimes only have to implement the actual Exec and Attach logic. For example, the container runtime CRI-O implements it with pseudo code like this:

func (s StreamService) Exec(
    ctx context.Context,
    containerID string,
    cmd []string,
    stdin io.Reader,
    stdout, stderr io.WriteCloser,
    tty bool,
    resizeChan <-chan remotecommand.TerminalSize,
) error {
    // Retrieve the container by the provided containerID
    // …

    // Update the container status and verify that the workload is running
    // …

    // Execute the command and stream the data
    return s.runtimeServer.Runtime().ExecContainer(
        s.ctx, c, cmd, stdin, stdout, stderr, tty, resizeChan,
    )
}

PortForward

Forwarding ports to a container works a bit differently compared to streaming I/O data from a workload. The server still has to provide a URL endpoint for the client to connect to, but then the container runtime has to enter the network namespace of the container, allocate the port, and stream the data back and forth. There is n

·kubernetes.io·
Marp: Markdown Presentation Ecosystem

Find Marp tools on GitHub! Create beautiful slide decks using an intuitive Markdown experience. Marp (also known as the…

April 30, 2024 at 02:43PM

via Instapaper

·marp.app·
I’m a sucker for cheat sheets | LLM Cheatsheet: Top 15 LLM Terms You Need to Know in 2024 — The Cloud Girl
Large Language Models (LLMs) are revolutionizing the way we interact with technology. But with all this innovation comes a new vocabulary! Fear not, fellow AI enthusiasts, for this blog is your decoder ring to the fascinating world of LLM lingo. Let's dive into some essential terms:
·thecloudgirl.dev·
The Verge hires Robison to cover artificial intelligence - Talking Biz News

Kylie Robison is joining as senior AI reporter, where she’ll lead the technology publication’s coverage of artificial intelligence. She will start…

April 30, 2024 at 01:28PM

via Instapaper

·talkingbiznews.com·
How an empty S3 bucket can make your AWS bill explode

A few weeks ago, I began working on the PoC of a document indexing system for my client. I created a single S3 bucket in the eu-west-1 region and uploaded some…

April 30, 2024 at 10:31AM

via Instapaper

·medium.com·
The hyper-clouds are open source's friends

Opinion: One of the knee-jerk arguments made by companies abandoning their open source roots is that they can't make money because the bad hyper-cloud companies…

April 30, 2024 at 10:16AM

via Instapaper

·theregister.com·
Atmosphere Verified Operating System
Atmosphere is a full-featured microkernel developed in Rust and verified with Verus. Conceptually Atmosphere is similar to the line of L4 microkernels. Atmosphere pushes most kernel functionality to user-space, e.g., device drivers, network stack, file systems, etc. The microkernel supports a minimal set of mechanisms to implement address spaces, page-tables, coarse-grained memory management, and threads of execution that together with address spaces implement an abstraction of a process. Each process has a page table and a collection of schedulable threads.
·mars-research.github.io·