Suggested Reads

54832 bookmarks
Test of a prototype quantum internet runs under New York City for half a month
Test of a prototype quantum internet runs under New York City for half a month
To introduce quantum networks into the marketplace, engineers must overcome the fragility of entangled states in a fiber cable and ensure the efficiency of signal delivery. Now, scientists at Qunnect Inc. in Brooklyn, New York, have taken a large step forward by operating just such a network under the streets of New York City.
·phys.org·
Test of a prototype quantum internet runs under New York City for half a month
Is the Open Source Bubble about to Burst?
Is the Open Source Bubble about to Burst?
Is the Open Source bubble about to burst? While Free and Open Source Software (FOSS) isn’t just another tech trend, it’s facing real challenges as its adoption skyrockets. With millions…
·tarakiyee.com·
Is the Open Source Bubble about to Burst?
The Best Kids Bike Helmets
The Best Kids Bike Helmets
Kids bike helmets have come a long way in recent years. We tested 13 of them, prioritizing safety, comfort, and kid appeal. These are our three favorites.
·nytimes.com·
The Best Kids Bike Helmets
Continuous reinvention: A brief history of block storage at AWS
Continuous reinvention: A brief history of block storage at AWS
Marc Olson, a long-time Amazonian, discusses the evolution of EBS, highlighting hard-won lessons in queueing theory, the importance of comprehensive instrumentation, and the value of incrementalism versus radical changes. It's an insightful look at how one of AWS’s foundational services has evolved to meet the needs of our customers.
·allthingsdistributed.com·
Continuous reinvention: A brief history of block storage at AWS
Ask the Developer Vol. 11, Super Mario Bros. Wonder—Part 2
Ask the Developer Vol. 11, Super Mario Bros. Wonder—Part 2
This article has been translated from the original Japanese content. Some of the images and videos shown in text were created during development. In this eleventh volume of Ask the Developer, an interview series in which Nintendo developers convey i…
·nintendo.com·
Ask the Developer Vol. 11, Super Mario Bros. Wonder—Part 2
I've got the genAI blues
I've got the genAI blues
ChatGPT and the like are not nearly as good as you think. And as time goes by, they're getting worse.
·computerworld.com·
I've got the genAI blues
Sourcegraph makes core repository private, co-founder complains open source means "extra work and risk" • DEVCLASS
Sourcegraph makes core repository private, co-founder complains open source means "extra work and risk" • DEVCLASS

Sourcegraph makes core repository private, co-founder complains open source means "extra work and risk" • DEVCLASS

Sourcegraph has removed the formerly open source core repository for its popular code search product from public view – with CEO and co-founder Quinn Slack…

August 23, 2024 at 09:16AM

via Instapaper

·devclass.com·
Sourcegraph makes core repository private, co-founder complains open source means "extra work and risk" • DEVCLASS
Kubernetes v1.31: kubeadm v1beta4
Kubernetes v1.31: kubeadm v1beta4

Kubernetes v1.31: kubeadm v1beta4

https://kubernetes.io/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/

As part of the Kubernetes v1.31 release, kubeadm is adopting a new (v1beta4) version of its configuration file format. Configuration in the previous v1beta3 format is now formally deprecated, which means it's supported but you should migrate to v1beta4 and stop using the deprecated format. Support for v1beta3 configuration will be removed after a minimum of 3 Kubernetes minor releases.

In this article, I'll walk you through the key changes, explain the kubeadm v1beta4 configuration format, and show how to migrate from v1beta3 to v1beta4.

You can read the reference for the v1beta4 configuration format: kubeadm Configuration (v1beta4).

A list of changes since v1beta3

This version improves on the v1beta3 format by fixing some minor issues and adding a few new fields.

To put it simply, here are the main changes:

Two new configuration elements: ResetConfiguration and UpgradeConfiguration

For InitConfiguration and JoinConfiguration, dryRun mode and nodeRegistration.imagePullSerial are supported

For ClusterConfiguration, there are new fields including certificateValidityPeriod, caCertificateValidityPeriod, encryptionAlgorithm, dns.disabled and proxy.disabled.

Support for extraEnvs in all control plane components

extraArgs changed from a map to a structured list of arguments that allows duplicate names

Add a timeouts structure for init, join, upgrade and reset.

For details, see the full list of changes below:

Support custom environment variables in control plane components under ClusterConfiguration. Use apiServer.extraEnvs, controllerManager.extraEnvs, scheduler.extraEnvs, etcd.local.extraEnvs.
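
As an illustrative sketch (the variable name and value here are placeholders, not part of the feature itself), extra environment variables can be declared like this:

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraEnvs:
  # Placeholder variable; any standard name/value environment variable pair works.
  - name: HTTP_PROXY
    value: "http://proxy.example.com:3128"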

The ResetConfiguration API type is now supported in v1beta4. Users are able to reset a node by passing a --config file to kubeadm reset.
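
A minimal sketch of such a file might look like the following; the field names force and cleanupTmpDir are assumptions based on the corresponding kubeadm reset flags, so check the v1beta4 reference before relying on them:

apiVersion: kubeadm.k8s.io/v1beta4
kind: ResetConfiguration
# Assumed fields mirroring the kubeadm reset command-line flags.
force: true
cleanupTmpDir: true

It would then be applied with kubeadm reset --config reset.yaml (reset.yaml being a hypothetical file name).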

dryRun mode is now configurable in InitConfiguration and JoinConfiguration.

Replace the existing string/string extra argument maps with structured extra arguments that support duplicates. The change applies to ClusterConfiguration - apiServer.extraArgs, controllerManager.extraArgs, scheduler.extraArgs, etcd.local.extraArgs. Also to nodeRegistrationOptions.kubeletExtraArgs.
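
As a sketch (the specific flag shown is only an example), a v1beta3-style map entry such as authorization-mode: "Node,RBAC" becomes a name/value item in a list under v1beta4, and the same argument name may now appear more than once:

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:
  # Structured arguments; duplicates of the same name are allowed.
  - name: authorization-mode
    value: "Node,RBAC"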

Added ClusterConfiguration.encryptionAlgorithm that can be used to set the asymmetric encryption algorithm used for this cluster's keys and certificates. Can be one of "RSA-2048" (default), "RSA-3072", "RSA-4096" or "ECDSA-P256".

Added ClusterConfiguration.dns.disabled and ClusterConfiguration.proxy.disabled that can be used to disable the CoreDNS and kube-proxy addons during cluster initialization. Skipping the related addon phases during cluster creation will set the same fields to true.

Added the nodeRegistration.imagePullSerial field in InitConfiguration and JoinConfiguration, which can be used to control if kubeadm pulls images serially or in parallel.
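
For instance, a minimal sketch that switches to parallel image pulls (assuming serial pulls remain the default) could look like this:

apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  # Pull the required images in parallel instead of one at a time.
  imagePullSerial: false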

The UpgradeConfiguration kubeadm API is now supported in v1beta4 when passing --config to kubeadm upgrade subcommands. For upgrade subcommands, the usage of component configuration for kubelet and kube-proxy, as well as InitConfiguration and ClusterConfiguration, is now deprecated and will be ignored when passing --config.

Added a timeouts structure to InitConfiguration, JoinConfiguration, ResetConfiguration and UpgradeConfiguration that can be used to configure various timeouts. The ClusterConfiguration.timeoutForControlPlane field is replaced by timeouts.controlPlaneComponentHealthCheck. The JoinConfiguration.discovery.timeout is replaced by timeouts.discovery.
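
As a rough sketch (the durations are illustrative), the replacement fields mentioned above sit under a single timeouts block:

apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
timeouts:
  # Replaces ClusterConfiguration.timeoutForControlPlane
  controlPlaneComponentHealthCheck: 4m0s
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
timeouts:
  # Replaces JoinConfiguration.discovery.timeout
  discovery: 5m0s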

Added certificateValidityPeriod and caCertificateValidityPeriod fields to ClusterConfiguration. These fields can be used to control the validity period of certificates generated by kubeadm during sub-commands such as init, join, upgrade and certs. Default values continue to be 1 year for non-CA certificates and 10 years for CA certificates. Also note that only non-CA certificates are renewable by kubeadm certs renew.
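
Putting several of the new ClusterConfiguration fields together, a minimal sketch (values are illustrative and assume the usual Kubernetes duration syntax) might look like this:

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
encryptionAlgorithm: ECDSA-P256
certificateValidityPeriod: 8760h0m0s     # 1 year, the default for non-CA certificates
caCertificateValidityPeriod: 87600h0m0s  # 10 years, the default for CA certificates
dns:
  disabled: true    # skip the CoreDNS addon
proxy:
  disabled: true    # skip the kube-proxy addon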

These changes simplify the configuration of tools that use kubeadm and improve the extensibility of kubeadm itself.

How to migrate v1beta3 configuration to v1beta4?

If your configuration is not using the latest version, it is recommended that you migrate using the kubeadm config migrate command.

This command reads an existing configuration file that uses the old format, and writes a new file that uses the current format.

Example

Using kubeadm v1.31, run kubeadm config migrate --old-config old-v1beta3.yaml --new-config new-v1beta4.yaml

How do I get involved?

Huge thanks to all the contributors who helped with the design, implementation, and review of this feature:

Lubomir I. Ivanov (neolit123)

Dave Chen (chendave)

Paco Xu (pacoxu)

Sata Qiu (sataqiu)

Baofa Fan (carlory)

Calvin Chen (calvin0327)

Ruquan Zhao (ruquanzhao)

For those interested in getting involved in future discussions on kubeadm configuration, you can reach out to kubeadm or SIG Cluster Lifecycle through several channels:

v1beta4 related items are tracked in kubeadm issue #2890.

Slack: #kubeadm or #sig-cluster-lifecycle

Mailing list

via Kubernetes Blog https://kubernetes.io/

August 22, 2024 at 08:00PM

·kubernetes.io·
Kubernetes v1.31: kubeadm v1beta4
janbjorge/PgQueuer
janbjorge/PgQueuer
PgQueuer is a Python library leveraging PostgreSQL for efficient job queuing. - janbjorge/PgQueuer at console.dev
·github.com·
janbjorge/PgQueuer
Introducing Zed AI
Introducing Zed AI
From the Zed Blog: Powerful AI-assisted coding powered by Anthropic's Claude, now available.
·zed.dev·
Introducing Zed AI
Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta
Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta

Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta

https://kubernetes.io/blog/2024/08/22/kubernetes-1-31-custom-profiling-kubectl-debug/

There are many ways of troubleshooting the pods and nodes in the cluster. However, kubectl debug is one of the easiest and most widely used. It provides a set of static profiles, each serving a different kind of role. For instance, from the network administrator's point of view, debugging the node should be as easy as this:

$ kubectl debug node/mynode -it --image=busybox --profile=netadmin

On the other hand, static profiles also bring inherent rigidity along with their ease of use. Pods (and nodes) come in many varieties, each with its own specific needs, and unfortunately some can't be debugged using the static profiles alone.

Take, for instance, a simple pod consisting of a container whose health relies on an environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: customapp:latest
    env:
    - name: REQUIRED_ENV_VAR
      value: "value1"

Currently, copying the pod is the only mechanism in kubectl debug that supports debugging this pod. Furthermore, what if the user needs to change REQUIRED_ENV_VAR to something different for advanced troubleshooting? There is no mechanism to achieve this.

Custom Profiling

Custom profiling is new functionality introduced in kubectl debug to provide extensibility, available via the --custom flag. It expects a partial Container spec in either YAML or JSON format. To debug the example-container above by creating an ephemeral container, we simply have to define this YAML:

partial_container.yaml

env:
- name: REQUIRED_ENV_VAR
  value: value2

and execute:

kubectl debug example-pod -it --image=customapp --custom=partial_container.yaml

Here is another example that modifies multiple fields at once (changing the port number, adding resource limits, and modifying an environment variable) in JSON:

{ "ports": [ { "containerPort": 80 } ], "resources": { "limits": { "cpu": "0.5", "memory": "512Mi" }, "requests": { "cpu": "0.2", "memory": "256Mi" } }, "env": [ { "name": "REQUIRED_ENV_VAR", "value": "value2" } ] }

Constraints

Uncontrolled extensibility hurts usability, so custom profiling is not allowed for certain fields such as command, image, lifecycle, volume devices and the container name. In the future, more fields can be added to the disallowed list if required.

Limitations

The kubectl debug command has three aspects: debugging with ephemeral containers, pod copying, and node debugging. The largest common denominator of these aspects is the container spec within a Pod. That's why custom profiling only supports modifying the fields defined for containers. This leads to a limitation: if a user needs to modify other fields in the Pod spec, it is not supported.

Acknowledgments

Special thanks to all the contributors who reviewed and commented on this feature, from the initial conception to its actual implementation (alphabetical order):

Eddie Zaneski

Maciej Szulik

Lee Verberne

via Kubernetes Blog https://kubernetes.io/

August 21, 2024 at 08:00PM

·kubernetes.io·
Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta
Kubernetes 1.31: Fine-grained SupplementalGroups control
Kubernetes 1.31: Fine-grained SupplementalGroups control

Kubernetes 1.31: Fine-grained SupplementalGroups control

https://kubernetes.io/blog/2024/08/22/fine-grained-supplementalgroups-control/

This blog discusses a new feature in Kubernetes 1.31 to improve the handling of supplementary groups in containers within Pods.

Motivation: Implicit group memberships defined in /etc/group in the container image

Although this behavior may not be popular with many Kubernetes cluster users/admins, Kubernetes, by default, merges group information from the Pod with information defined in /etc/group in the container image.

Let's look at an example. The Pod below specifies runAsUser=1000, runAsGroup=3000 and supplementalGroups=4000 in its security context.

implicit-groups.yaml

apiVersion: v1
kind: Pod
metadata:
  name: implicit-groups
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
  containers:
  - name: ctr
    image: registry.k8s.io/e2e-test-images/agnhost:2.45
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      allowPrivilegeEscalation: false

What is the result of the id command in the ctr container?

Create the Pod:

$ kubectl apply -f https://k8s.io/blog/2024-08-22-Fine-grained-SupplementalGroups-control/implicit-groups.yaml

Verify that the Pod's Container is running:

$ kubectl get pod implicit-groups

Check the id command

$ kubectl exec implicit-groups -- id

The output should be similar to this:

uid=1000 gid=3000 groups=3000,4000,50000

Where does group ID 50000 in supplementary groups (groups field) come from, even though 50000 is not defined in the Pod's manifest at all? The answer is /etc/group file in the container image.

Checking the contents of /etc/group in the container image reveals the following:

$ kubectl exec implicit-groups -- cat /etc/group
...
user-defined-in-image:x:1000:
group-defined-in-image:x:50000:user-defined-in-image

Aha! The container's primary user 1000 belongs to the group 50000 in the last entry.

Thus, the group membership defined in /etc/group in the container image for the container's primary user is implicitly merged to the information from the Pod. Please note that this was a design decision the current CRI implementations inherited from Docker, and the community never really reconsidered it until now.

What's wrong with it?

The implicitly merged group information from /etc/group in the container image may cause some concerns, particularly in accessing volumes (see kubernetes/kubernetes#112879 for details), because file permission is controlled by uid/gid in Linux. Even worse, the implicit gids from /etc/group cannot be detected or validated by any policy engines because there is no clue to the implicit group information in the manifest. This can also be a concern for Kubernetes security.

Fine-grained SupplementalGroups control in a Pod: SupplementaryGroupsPolicy

To tackle the above problem, Kubernetes 1.31 introduces a new field, supplementalGroupsPolicy, in the Pod's .spec.securityContext.

This field provides a way to control how supplementary groups are calculated for the container processes in a Pod. The available policies are:

Merge: The group membership defined in /etc/group for the container's primary user will be merged. If not specified, this policy will be applied (i.e. the as-is behavior, for backward compatibility).

Strict: only the group IDs specified in the fsGroup, supplementalGroups, or runAsGroup fields are attached as supplementary groups of the container processes. This means no group membership defined in /etc/group for the container's primary user will be merged.

Let's see how Strict policy works.

strict-supplementalgroups-policy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: strict-supplementalgroups-policy
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict
  containers:
  - name: ctr
    image: registry.k8s.io/e2e-test-images/agnhost:2.45
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      allowPrivilegeEscalation: false

Create the Pod:

$ kubectl apply -f https://k8s.io/blog/2024-08-22-Fine-grained-SupplementalGroups-control/strict-supplementalgroups-policy.yaml

Verify that the Pod's Container is running:

$ kubectl get pod strict-supplementalgroups-policy

Check the process identity:

$ kubectl exec -it strict-supplementalgroups-policy -- id

The output should be similar to this:

uid=1000 gid=3000 groups=3000,4000

You can see that the Strict policy excludes group 50000 from groups!

Thus, enforcing supplementalGroupsPolicy: Strict (via some policy mechanism) helps prevent implicit supplementary groups in a Pod.
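
As one possible enforcement mechanism (a sketch only; the policy name and CEL expression below are mine, not part of this feature), a ValidatingAdmissionPolicy could reject Pods that do not opt into the Strict policy. A matching ValidatingAdmissionPolicyBinding would still be needed to put it into effect:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-strict-supplemental-groups   # hypothetical policy name
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  - expression: >-
      has(object.spec.securityContext) &&
      has(object.spec.securityContext.supplementalGroupsPolicy) &&
      object.spec.securityContext.supplementalGroupsPolicy == 'Strict'
    message: "supplementalGroupsPolicy must be set to Strict"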

Note: This alone is not enough, because a container with sufficient privileges/capabilities can change its process identity. Please see the following section for details.

Attached process identity in Pod status

This feature also exposes the process identity attached to the first container process of the container via the .status.containerStatuses[].user.linux field. This is helpful for checking whether implicit group IDs are attached.

...
status:
  containerStatuses:
  - name: ctr
    user:
      linux:
        gid: 3000
        supplementalGroups:
        - 3000
        - 4000
        uid: 1000
...

Note: The value in the status.containerStatuses[].user.linux field is the process identity initially attached to the first container process in the container. If the container has sufficient privilege to call system calls related to process identity (e.g. setuid(2), setgid(2) or setgroups(2)), the container process can change its identity. Thus, the actual process identity will be dynamic.

Feature availability

To enable supplementalGroupsPolicy field, the following components have to be used:

Kubernetes: v1.31 or later, with the SupplementalGroupsPolicy feature gate enabled. As of v1.31, the gate is marked as alpha.

CRI runtime:

containerd: v2.0 or later

CRI-O: v1.31 or later
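
As a minimal sketch of enabling the alpha gate on a kubeadm-managed cluster (the exact placement is an assumption; any mechanism that sets --feature-gates on the kube-apiserver and featureGates on the kubelet works):

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:
  - name: feature-gates
    value: SupplementalGroupsPolicy=true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SupplementalGroupsPolicy: true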

You can see if the feature is supported in the Node's .status.features.supplementalGroupsPolicy field.

apiVersion: v1
kind: Node
...
status:
  features:
    supplementalGroupsPolicy: true

What's next?

Kubernetes SIG Node hopes, and expects, that the feature will be promoted to beta and eventually to general availability (GA) in future releases of Kubernetes, so that users no longer need to enable the feature gate manually.

Merge policy is applied when supplementalGroupsPolicy is not specified, for backwards compatibility.

How can I learn more?

See Configure a Security Context for a Pod or Container for further details of supplementalGroupsPolicy

KEP-3619: Fine-grained SupplementalGroups control

How to get involved?

This feature is driven by the SIG Node community. Please join us to connect with the community and share your ideas and feedback around the above feature and beyond. We look forward to hearing from you!

via Kubernetes Blog https://kubernetes.io/

August 21, 2024 at 08:00PM

·kubernetes.io·
Kubernetes 1.31: Fine-grained SupplementalGroups control