1_r/devopsish

20 Years in the Making, GnuCOBOL Is Ready for Industry
GnuCOBOL "has reached an industrial maturity and can compete with proprietary offers in all environments," boasted contributor Fabrice Le Fessant, in a FOSDEM talk.
·thenewstack.io·
What we got wrong at Oculus that Apple got right
by Hugo Barra (former Head of Oculus at Meta) Friends and colleagues have been asking me to share my perspective on the Apple Vision Pro as a product. Inspired by my dear friend Matt Mullenweg’…
·hugo.blog·
Week Ending March 10 2024

Week Ending March 10, 2024

https://lwkd.info/2024/20240315

Developer News

Kubernetes Contributor Summit EU is happening next Tuesday, March 19, 2024. Make sure to register by March 15. If you want to bring a family member to the social, send an email to summit-team@kubernetes.io. We’re eagerly looking forward to receiving your contributions to the unconference topics.

Also, don’t forget to help your SIG staff its table at the Kubernetes Meet and Greet on the Friday of KubeCon.

Take a peek at the upcoming Kubernetes v1.30 release in this blog post.

Release Schedule

Next Deadline: Draft Doc PRs, Mar 12th

Kubernetes v1.30.0-beta.0 is live!

Your SIG should be working on any feature blogs, and discussing what “themes” to feature in the Release Notes.

Featured PR

123516: DRA: structured parameters

DRA, or Dynamic Resource Allocation, is a way to bridge new types of schedulable resources into Kubernetes. The most common example is GPU accelerator cards, but the system is built as generically as possible: maybe you want to schedule based on cooling capacity, or cash register hardware, or nearby interns; it’s up to you. DRA launched as an alpha feature back in 1.26 but came with some hard limitations. Notably, the bespoke logic for simulating scale-ups and scale-downs in cluster-autoscaler had no way to understand how those would interact with these opaque resources. This PR pulls back the veil a tiny bit, keeping things generic but allowing more forms of structured interaction so that core tools like the scheduler and autoscalers can understand dynamic resources.

This happens from a few directions. First, on the node itself a DRA driver plugin provides information about what is available locally, which the kubelet publishes as a NodeResourceSlice object. In parallel, an operator component from the DRA implementation creates ResourceClaimParameters as needed to describe a particular resource claim. The claim parameters include CEL selector expressions for each piece of the claim, allowing anything which can evaluate CEL to check them independently of the DRA plugin. These two new objects combine with the existing ResourceClaim object to allow bidirectional communication between Kubernetes components and the DRA plugin without either side needing to wait for the other in most operations.
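As a rough sketch, a ResourceClaimParameters object with a CEL selector might look like the following. The field shapes are from the alpha resource.k8s.io/v1alpha2 API and may change; the driver name, object name, and attribute are hypothetical examples:

```yaml
# Hypothetical sketch of the alpha ResourceClaimParameters API (resource.k8s.io/v1alpha2).
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimParameters
metadata:
  name: large-gpu-params        # hypothetical name
driverRequests:
- driverName: gpu.example.com   # hypothetical DRA driver
  requests:
  - namedResources:
      # CEL expression evaluated against the attributes the driver publishes
      # for each resource instance; the scheduler and autoscaler can evaluate
      # it without a round trip to the driver.
      selector: attributes.version["driverVersion"].isGreaterThan(semver("1.0.0"))
```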

While this does increase the implementation complexity of a new DRA provider, it also dramatically expands its capabilities. New resources can be managed with effectively zero overhead and without the even greater complexity of custom schedulers or a plugin-driven autoscaler.

KEP of the Week

KEP-647: APIServer Tracing

This KEP proposes updating the kube-apiserver to allow tracing requests, using OpenTelemetry libraries and exporting the data in the OpenTelemetry format. The kube-apiserver currently uses kubernetes/utils/trace for tracing, but distributed tracing would improve ease of use and make the data easier to analyze. The proposed implementation wraps the API server’s HTTP server and HTTP clients with otelhttp.
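For illustration, a tracing configuration file for the kube-apiserver (passed via the --tracing-config-file flag) looks roughly like this; the collector endpoint is an assumed address:

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: TracingConfiguration
# OTLP gRPC endpoint of an OpenTelemetry collector (assumed address)
endpoint: localhost:4317
# sample roughly 1 in 10,000 requests
samplingRatePerMillion: 100
```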

This KEP is tracked to graduate to stable in the upcoming v1.30 release.

Other Merges

podLogsDir key in kubelet configuration to configure default location of pod logs.

New --custom flag to kubectl debug for adding custom debugging profiles.

The PodResources API now includes initContainers with a containerRestartPolicy of Always when the SidecarContainers feature is enabled.

Fix to the disruption controller’s PDB status sync to maintain PDB conditions during an update.

Service NodePort can now be set to 0 if AllocateLoadBalancerNodePorts is false.

Field selector for services that allows filtering by clusterIP field.

The .memorySwap.swapBehavior field in the kubelet configuration gets NoSwap as the default value.

kubectl get jobs now prints the status of the listed jobs.

Bugfix for initContainer with containerRestartPolicy Always where it couldn’t update its Pod state from terminated to non-terminated.

The StorageVersionMigration API, which was previously available as a CRD, is now a built-in API.

InitContainer’s image location will now be considered in scheduling when prioritizing nodes.

Almost all printable ASCII characters are now allowed in environment variables.

Added support for configuring multiple JWT authenticators in Structured Authentication Configuration.

New trafficDistribution field added to the Service spec which allows configuring how traffic is distributed to the endpoints of a Service.

JWT authenticator config set via the --authentication-config flag is now dynamically reloaded as the file changes on disk.
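Putting the last two items together, a structured authentication configuration with two JWT authenticators might look roughly like this (issuer URLs and claim mappings are illustrative):

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://issuer-one.example.com   # example issuer
    audiences:
    - my-cluster
  claimMappings:
    username:
      claim: sub
      prefix: "one:"
- issuer:
    url: https://issuer-two.example.com   # a second authenticator in the same file
    audiences:
    - my-cluster
  claimMappings:
    username:
      claim: email
      prefix: "two:"
```

The file is passed to the kube-apiserver via --authentication-config and, per the change above, is reloaded when it changes on disk.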

Promotions

StructuredAuthorizationConfiguration to beta.

HPAContainerMetrics to GA.

MinDomainsInPodTopologySpread to GA.

ValidatingAdmissionPolicy to GA.

StructuredAuthenticationConfiguration to beta.

KubeletConfigDropInDir to beta.

Version Updates

google.golang.org/protobuf updated to v1.33.0 to resolve CVE-2024-24786.

Subprojects and Dependency Updates

gRPC to v1.62.1, fixing a bug that resulted in “no matching virtual host found” RPC errors

cloud-provider-openstack to v1.28.2, implementing imagePullSecret support for release 1.28

via Last Week in Kubernetes Development https://lwkd.info/

March 14, 2024 at 11:53PM

·lwkd.info·
Former Treasury Secretary Steve Mnuchin says he's putting together investor group to buy TikTok
Former Treasury Secretary Steven Mnuchin says he’s going to put together an investor group to buy TikTok, a day after the House of Representatives passed a bill that would ban the popular video app in the U.S. if its China-based owner doesn’t sell its stake.
·apnews.com·
KubeCon CloudNativeCon Europe 2024
View more about this event at KubeCon + CloudNativeCon Europe 2024
·kccnceu2024.sched.com·
Strategy & Impact: DevRel at Conferences
Learn key strategies for maximizing the impact of speaking at conferences within the DevRel domain. Explore insights on collaboration, metrics alignment, and fostering internal advocacy. Listen to the session recording for more.
·unfairmindshare.com·
Recognize all contributors
✨ Recognize all contributors, not just the ones who write code. ✨ All Contributors helps praise everyone contributing to open source projects.
·allcontributors.org·
A Peek at Kubernetes v1.30

A Peek at Kubernetes v1.30

https://kubernetes.io/blog/2024/03/12/kubernetes-1-30-upcoming-changes/

Authors: Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko

A quick look: exciting changes in Kubernetes v1.30

It's a new year and a new Kubernetes release. We're halfway through the release cycle and have quite a few interesting and exciting enhancements coming in v1.30. From brand new features in alpha, to established features graduating to stable, to long-awaited improvements, this release has something for everyone to pay attention to!

To tide you over until the official release, here's a sneak peek of the enhancements we're most excited about in this cycle!

Major changes for Kubernetes v1.30

Structured parameters for dynamic resource allocation (KEP-4381)

Dynamic resource allocation was added to Kubernetes as an alpha feature in v1.26. It defines an alternative to the traditional device-plugin API for requesting access to third-party resources. By design, dynamic resource allocation uses parameters for resources that are completely opaque to core Kubernetes. This approach poses a problem for the Cluster Autoscaler (CA) or any higher-level controller that needs to make decisions for a group of pods (e.g. a job scheduler). It cannot simulate the effect of allocating or deallocating claims over time. Only the third-party DRA drivers have the information available to do this.

Structured Parameters for dynamic resource allocation is an extension of the original implementation that addresses this problem by building a framework to make these claim parameters less opaque. Instead of handling the semantics of all claim parameters themselves, drivers can manage resources and describe them using a specific "structured model" pre-defined by Kubernetes. This allows components aware of this "structured model" to make decisions about these resources without outsourcing them to some third-party controller. For example, the scheduler could allocate claims rapidly without back-and-forth communication with dynamic resource allocation drivers. Work done for this release centers on defining the framework necessary to enable different "structured models" and on implementing the "named resources" model. This model allows listing individual resource instances and, compared to the traditional device plugin API, adds the ability to select those instances individually via attributes.

Node memory swap support (KEP-2400)

In Kubernetes v1.30, memory swap support on Linux nodes gets a big change to how it works, with a strong emphasis on improving system stability. In previous Kubernetes versions, the NodeSwap feature gate was disabled by default, and when enabled, it used UnlimitedSwap as the default behavior. To achieve better stability, UnlimitedSwap behavior (which might compromise node stability) will be removed in v1.30.

The updated, still-beta support for swap on Linux nodes will be available by default. However, the default behavior will be to run the node set to NoSwap (not UnlimitedSwap) mode. In NoSwap mode, the kubelet supports running on a node where swap space is active, but Pods don't use any of the page file. You'll still need to set --fail-swap-on=false for the kubelet to run on that node. However, the big change is the other mode: LimitedSwap. In this mode, the kubelet actually uses the page file on that node and allows Pods to have some of their virtual memory paged out. Containers (and their parent pods) do not have access to swap beyond their memory limit, but the system can still use the swap space if available.
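As a sketch, enabling LimitedSwap on a node comes down to two kubelet configuration settings (plus swap being active on the host):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allow the kubelet to start on a node that has swap enabled.
failSwapOn: false
memorySwap:
  # NoSwap is the v1.30 default; LimitedSwap lets Pods page out
  # some virtual memory within their memory limits.
  swapBehavior: LimitedSwap
```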

Kubernetes' Node special interest group (SIG Node) will also update the documentation to help you understand how to use the revised implementation, based on feedback from end users, contributors, and the wider Kubernetes community.

Read the previous blog post or the node swap documentation for more details on Linux node swap support in Kubernetes.

Support user namespaces in pods (KEP-127)

User namespaces is a Linux-only feature that better isolates pods to prevent or mitigate several CVEs rated high/critical, including CVE-2024-21626, published in January 2024. In Kubernetes 1.30, support for user namespaces is migrating to beta and now supports pods with and without volumes, custom UID/GID ranges, and more!
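Opting a pod into a user namespace is a single field on the Pod spec; a minimal example (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  # Run this pod's containers in a separate user namespace, so UID 0
  # inside the container maps to an unprivileged UID on the host.
  hostUsers: false
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```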

Structured authorization configuration (KEP-3221)

Support for structured authorization configuration is moving to beta and will be enabled by default. This feature enables the creation of authorization chains with multiple webhooks with well-defined parameters that validate requests in a particular order, and it allows fine-grained control, such as explicit Deny on failures. The configuration file approach even allows you to specify CEL rules to pre-filter requests before they are dispatched to webhooks, helping you prevent unnecessary invocations. The API server also automatically reloads the authorizer chain when the configuration file is modified.

You must specify the path to that authorization configuration using the --authorization-config command line argument. If you want to keep using command line flags instead of a configuration file, those will continue to work as-is. To gain access to new authorization webhook capabilities like multiple webhooks, failure policy, and pre-filter rules, switch to putting options in an --authorization-config file. From Kubernetes 1.30, the configuration file format is beta-level and only requires specifying --authorization-config, since the feature gate is enabled by default. An example configuration with all possible values is provided in the Authorization docs; read them for more details.
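A minimal sketch of such a configuration file, assuming a webhook reachable via a kubeconfig file (the webhook name and path are illustrative):

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
- type: Webhook
  name: example-policy
  webhook:
    timeout: 3s
    failurePolicy: Deny                 # explicit Deny when the webhook fails
    subjectAccessReviewVersion: v1
    matchConditionSubjectAccessReviewVersion: v1
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig  # assumed path
    matchConditions:
    # CEL pre-filter: skip this webhook for service account requests.
    - expression: "!('system:serviceaccounts' in request.groups)"
- type: Node
  name: node
- type: RBAC
  name: rbac
```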

Container resource based pod autoscaling (KEP-1610)

Horizontal pod autoscaling based on ContainerResource metrics will graduate to stable in v1.30. This new behavior for HorizontalPodAutoscaler allows you to configure automatic scaling based on the resource usage for individual containers, rather than the aggregate resource use over a Pod. See our previous article for further details, or read container resource metrics.
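A HorizontalPodAutoscaler using a ContainerResource metric looks like this (workload and container names are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: app   # scale on this container's CPU, not the Pod aggregate
      target:
        type: Utilization
        averageUtilization: 60
```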

CEL for admission control (KEP-3488)

Integrating Common Expression Language (CEL) for admission control in Kubernetes introduces a more dynamic and expressive way of evaluating admission requests. This feature allows complex, fine-grained policies to be defined and enforced directly through the Kubernetes API, enhancing security and governance capabilities without compromising performance or flexibility.

CEL's addition to Kubernetes admission control empowers cluster administrators to craft intricate rules that can evaluate the content of API requests against the desired state and policies of the cluster without resorting to Webhook-based access controllers. This level of control is crucial for maintaining the integrity, security, and efficiency of cluster operations, making Kubernetes environments more robust and adaptable to various use cases and requirements. For more information on using CEL for admission control, see the API documentation for ValidatingAdmissionPolicy.
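As a small sketch, a ValidatingAdmissionPolicy that caps Deployment replicas with a CEL expression (the policy name and limit are illustrative, and a ValidatingAdmissionPolicyBinding is still needed to put it into effect):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
    message: "replicas must be no more than 5"
```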

We hope you're as excited for this release as we are. Keep an eye out for the official release blog in a few weeks for more highlights!

via Kubernetes Blog https://kubernetes.io/

March 11, 2024 at 08:00PM

·kubernetes.io·
Announcing Target’s Open Source Fund
An article by Brian Muenzenmeyer : Sharing details around Target's new Open Source support fund
·tech.target.com·
A Deeper Dive into VEX Documents
Learn how Vulnerability Exploitability eXchange (VEX) documents play a critical role in the efficient management of vulnerabilities in OT environments.
·blog.adolus.com·
Week Ending March 3 2024

Week Ending March 3, 2024

https://lwkd.info/2024/20240307

Developer News

All CI jobs must be on Kubernetes community infra as of yesterday. While the infra team will migrate jobs that are simple, other jobs may be deleted if you don’t help move them. Update your jobs now.

Monitoring dashboards for the GKE and EKS build clusters are live. Also, there was an outage in EKS jobs last week.

After a year of work led by Tim Hockin, Go Workspaces support for hacking on Kubernetes is now available, eliminating a lot of GOPATH pain.

It’s time to start working on your SIG Annual Reports, which you should find much shorter and easier than in previous years. Note that you don’t have to be a SIG Chair to write these; the chairs just have to review them.

Release Schedule

Next Deadline: Test Freeze, March 27th

Code Freeze is now in effect. If your KEP did not get tracked and you want it shipped in the 1.30 release, please file an exception as soon as possible.

March Cherry Pick deadline for patch releases is the 8th.

Featured PRs

122717: KEP-4358: Custom Resource Field Selectors

Selectors in Kubernetes have long been a way to limit large API calls like List and Watch, requesting only objects with specific labels or similar. In operators this can be very important for reducing the memory usage of shared informer caches, as well as generally keeping apiserver load down. Some core objects extended selectors beyond labels, allowing filtering on other fields, such as listing Pods based on spec.nodeName. But this set of fields was limited and could feel random if you didn’t know the specific history of the API (e.g. Pods need a node name filter because it’s the main request made by the kubelet). And it wasn’t available at all to custom types. This PR expands the system, allowing each custom type to declare selectable fields which will be checked and indexed automatically. The declaration uses JSONPath in a very similar way to the additionalPrinterColumns feature:

selectableFields:
- jsonPath: .spec.color
- jsonPath: .spec.size

These can then be used in the API just like any other field selector:

c.List(context.Background(), &redThings, client.MatchingFields{"spec.color": "red"})

As an alpha feature, this is behind the CustomResourceFieldSelectors feature gate.

KEP of the Week

KEP-1610: Container Resource based Autoscaling

For scaling pods based on resource usage, the HPA currently calculates the sum of all the individual containers’ resource usage. This is not suitable for workloads where the containers are not related to each other. This KEP proposes that the HPA also provide an option to scale pods based on the resource usage of individual containers in a Pod. It adds a new ContainerResourceMetricSource metric source with a new Container field, which identifies the container whose resources should be tracked. When there are multiple containers in a Pod, their individual resource usages can change at different rates; specifying the target container gives more fine-grained control over scaling.

This KEP is in beta since v1.27 and is planned to graduate to stable in v1.30.

Other Merges

Tunnel kubectl port-forwarding through websockets

Enhanced conflict detection for Service Account and JWT

Create token duration can be zero

Reject empty usernames in OIDC authentication

OpenAPI V2 won’t publish non-matching group-version

New metrics: authorization webhook match conditions, jwt auth latency, watch cache latency

Kubeadm: list nodes needing upgrades, don’t pass duplicate default flags, better upgrade plans, WaitForAllControlPlaneComponents, upgradeConfiguration timeouts, upgradeConfiguration API

Implement strict JWT compact serialization enforcement

Don’t leak discovery documents via the Spec.Service field

Let the container runtime garbage-collect images by tagging them

Client-Go can upgrade subresource fields, and handles cache deletions

Wait for the ProviderID to be available before initializing a node

Don’t panic if nodecondition is nil

Broadcaster logging is now logging level 3

Access mode label for SELinux mounts

AuthorizationConfiguration v1alpha1 is also v1beta1

Kubelet user mapping IDs are configurable

Filter group versions in aggregated API requests

Match condition e2e tests are conformance

Kubelet gets constants from cadvisor

Promotions

PodSchedulingReadiness to GA

ImageMaximumGCAge to Beta

StructuredAuthorizationConfiguration to beta

MinDomainsInPodTopologySpread to beta

RemoteCommand Over Websockets to beta

ContainerCheckpoint to beta

ServiceAccountToken Info to beta

AggregatedDiscovery v2 to GA

PodHostIPs to GA

Version Updates

cadvisor to v0.49.0

kubedns to 1.23.0

Subprojects and Dependency Updates

kubespray to v2.24.1, setting Kubernetes v1.28.6 as the default Kubernetes version

prometheus to v2.50.1, fixing the broken /metadata API endpoint

via Last Week in Kubernetes Development https://lwkd.info/

March 07, 2024 at 05:00PM

·lwkd.info·