1_r/devopsish

Threat Brief: Operation Lunar Peek, Activity Related to CVE-2024-0012 and CVE-2024-9474 (Updated Nov. 22)
Threat Brief: Operation Lunar Peek, Activity Related to CVE-2024-0012 and CVE-2024-9474 (Updated Nov. 22)
We detail the observed limited activity regarding authentication bypass vulnerability CVE-2024-0012 affecting specific versions of PAN-OS software, and include protections and mitigations.
·unit42.paloaltonetworks.com·
Threat Brief: Operation Lunar Peek, Activity Related to CVE-2024-0012 and CVE-2024-9474 (Updated Nov. 22)
China’s Hacking Reached Deep Into U.S. Telecoms
China’s Hacking Reached Deep Into U.S. Telecoms
The chairman of the Senate Intelligence Committee said hackers listened to phone calls and read texts by exploiting aging equipment and seams in the networks that connect systems.
·nytimes.com·
China’s Hacking Reached Deep Into U.S. Telecoms
Argo CD GitOps Promotions with Kargo (by Akuity): A Brilliant Idea with Flawed Execution?
Argo CD GitOps Promotions with Kargo (by Akuity): A Brilliant Idea with Flawed Execution?

Argo CD GitOps Promotions with Kargo (by Akuity): A Brilliant Idea with Flawed Execution?

In this deep dive, we explore how Kargo standardizes promotions, offering visibility and guardrails for your CI/CD pipelines. Learn how to integrate Kargo with Argo CD, manage multi-stage deployments, and tackle the challenges of modern DevOps workflows. Watch now to see Kargo in action and find out if it's the right tool for your DevOps toolkit.

#Kargo #Akuity #ArgoCD

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Transcript and commands: https://devopstoolkit.live/ci-cd/argo-cd-gitops-promotions-with-kargo-by-akuity-a-brilliant-idea-with-flawed-execution?
🔗 Kargo: https://kargo.io
🎬 GitOps playlist: https://youtube.com/playlist?list=PLyicRj904Z99dJk8bOygbov5up5YYvoZV
🎬 SemVer: https://github.com/masterminds/semver

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Argo CD Promotions with Argo CD and Kargo
05:37 Argo CD ApplicationSet
08:00 Kargo (by Akuity) Promotion Definitions
15:16 Kargo Promotions in Action
23:38 Kargo Critique
28:15 Kargo Pros and Cons

via YouTube https://www.youtube.com/watch?v=RoY7Qu51zwU

·youtube.com·
Argo CD GitOps Promotions with Kargo (by Akuity): A Brilliant Idea with Flawed Execution?
Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency | Amazon Web Services
Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency | Amazon Web Services

Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency | Amazon Web Services

Today, we are adding support for Valkey 8.0 on Amazon ElastiCache. ElastiCache version 8.0 for Valkey brings faster scaling for ElastiCache Serverless and…

November 22, 2024 at 09:29AM

via Instapaper

·aws.amazon.com·
Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency | Amazon Web Services
Last Week in Kubernetes Development - Week Ending November 17 2024
Last Week in Kubernetes Development - Week Ending November 17 2024

Week Ending November 17, 2024

https://lwkd.info/2024/20241120

Developer News

KubeCon NA Salt Lake City was last week! The Kubernetes Contributor Summit was held on Monday, November 11th. Find the photos and meeting notes from the unconference discussions here.

When one KubeCon ends, another one starts: the CfP is open for the Maintainer Summit at KubeCon London. The Summit includes the Kubernetes Contributor Summit plus collaboration with other projects. CfPs for the KubeCon Main Track and Co-located Events are open as well. And if you're going to be at KubeCon India, don't skip the Maintainer Summit there.

There have been some updates to SIG leadership. Richa Banker is nominated as the new chair for SIG Instrumentation and Marko Mudrinić is nominated as a Tech Lead for SIG K8S Infra. Congratulations and thank you for all your work!

Release Schedule

Next Deadline: Docs Freeze, November 26th

Code freeze has been in effect since last week. So far, 44 enhancements are tracked for v1.32 after code freeze: 18 in the alpha stage, 12 graduating to beta, 13 graduating to GA, and one KEP that is a deprecation.

The Docs Freeze deadline is coming up. If your KEP is tracked for v1.32, please make sure to get your docs PRs reviewed and merged before the Docs Freeze.

Patch releases 1.29.11, 1.30.7, 1.31.3 are now available.

Other Merges

The DRA kubelet API has its own protobuf package

Adjust resize policy validation to be backwards-compatible

Promotions

InPlacePodVerticalScaling to Beta

via Last Week in Kubernetes Development https://lwkd.info/

November 20, 2024 at 05:00PM

·lwkd.info·
Last Week in Kubernetes Development - Week Ending November 17 2024
Gateway API v1.2: WebSockets Timeouts Retries and More
Gateway API v1.2: WebSockets Timeouts Retries and More

Gateway API v1.2: WebSockets, Timeouts, Retries, and More

https://kubernetes.io/blog/2024/11/21/gateway-api-v1-2/

Kubernetes SIG Network is delighted to announce the general availability of Gateway API v1.2! This version of the API was released on October 3, and we're pleased to report that there are now a number of conformant implementations of it for you to try out.

Gateway API v1.2 brings a number of new features to the Standard channel (Gateway API's GA release channel), introduces some new experimental features, and inaugurates our new release process — but it also brings two breaking changes that you'll want to be careful of.

Breaking changes

GRPCRoute and ReferenceGrant v1alpha2 removal

Now that the v1 versions of GRPCRoute and ReferenceGrant have graduated to Standard, the old v1alpha2 versions have been removed from both the Standard and Experimental channels, in order to ease the maintenance burden that perpetually supporting the old versions would place on the Gateway API community.

Before upgrading to Gateway API v1.2, you'll want to confirm that any implementations of Gateway API have been upgraded to support the v1 API version of these resources instead of the v1alpha2 API version. Note that even if you've been using v1 in your YAML manifests, a controller may still be using v1alpha2 which would cause it to fail during this upgrade. Additionally, Kubernetes itself goes to some effort to stop you from removing a CRD version that it thinks you're using: check out the release notes for more information about what you need to do to safely upgrade.
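One practical way to check for stragglers before upgrading (a suggested command, not from the release notes; substitute the CRD you're inspecting) is to look at the versions Kubernetes believes are still stored:

```
kubectl get crd grpcroutes.gateway.networking.k8s.io -o jsonpath='{.status.storedVersions}'
```

If the output still lists v1alpha2, stored objects need to be migrated before the old version can be safely dropped.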

Change to .status.supportedFeatures (experimental)

A much smaller breaking change: .status.supportedFeatures in a Gateway is now a list of objects instead of a list of strings. The objects have a single name field, so the translation from the strings is straightforward, but moving to objects permits a lot more flexibility for the future. This stanza is not yet present in the Standard channel.
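The shape change can be sketched as a before/after (the feature name shown is illustrative, not an exhaustive list):

```yaml
# Before (v1.1): a list of strings
status:
  supportedFeatures:
  - HTTPRouteRequestMirror

# After (v1.2): a list of objects, each with a name field
status:
  supportedFeatures:
  - name: HTTPRouteRequestMirror
```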

Graduations to the standard channel

Gateway API 1.2.0 graduates four features to the Standard channel, meaning that they can now be considered generally available. Inclusion in the Standard release channel denotes a high level of confidence in the API surface and provides guarantees of backward compatibility. Of course, as with any other Kubernetes API, Standard channel features can continue to evolve with backward-compatible additions over time, and we certainly expect further refinements and improvements to these new features in the future. For more information on how all of this works, refer to the Gateway API Versioning Policy.

HTTPRoute timeouts

GEP-1742 introduced the timeouts stanza into HTTPRoute, permitting configuring basic timeouts for HTTP traffic. This is a simple but important feature for proper resilience when handling HTTP traffic, and it is now Standard.

For example, this HTTPRoute configuration sets a timeout of 300ms for traffic to the /face path:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: face-with-timeouts
  namespace: faces
spec:
  parentRefs:
  - name: my-gateway
    kind: Gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /face
    backendRefs:
    - name: face
      port: 80
    timeouts:
      request: 300ms

For more information, check out the HTTP routing documentation. (Note that this applies only to HTTPRoute timeouts. GRPCRoute timeouts are not yet part of Gateway API.)

Gateway infrastructure labels and annotations

Gateway API implementations are responsible for creating the backing infrastructure needed to make each Gateway work. For example, implementations running in a Kubernetes cluster often create Services and Deployments, while cloud-based implementations may be creating cloud load balancer resources. In many cases, it can be helpful to be able to propagate labels or annotations to these generated resources.

In v1.2.0, the Gateway infrastructure stanza moves to the Standard channel, allowing you to specify labels and annotations for the infrastructure created by the Gateway API controller. For example, if your Gateway infrastructure is running in-cluster, you can specify both Linkerd and Istio injection using the following Gateway configuration, making it simpler for the infrastructure to be incorporated into whichever service mesh you've installed:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: meshed-gateway
  namespace: incoming
spec:
  gatewayClassName: meshed-gateway-class
  listeners:
  - name: http-listener
    protocol: HTTP
    port: 80
  infrastructure:
    labels:
      istio-injection: enabled
    annotations:
      linkerd.io/inject: enabled

For more information, check out the infrastructure API reference.

Backend protocol support

Since Kubernetes v1.20, the Service and EndpointSlice resources have supported a stable appProtocol field to allow users to specify the L7 protocol that Service supports. With the adoption of KEP 3726, Kubernetes now supports three new appProtocol values:

kubernetes.io/h2c

HTTP/2 over cleartext as described in RFC 7540

kubernetes.io/ws

WebSocket over cleartext as described in RFC 6455

kubernetes.io/wss

WebSocket over TLS as described in RFC 6455

With Gateway API 1.2.0, support for honoring appProtocol is now Standard. For example, given the following Service:

apiVersion: v1
kind: Service
metadata:
  name: websocket-service
  namespace: my-namespace
spec:
  selector:
    app.kubernetes.io/name: websocket-app
  ports:
  - name: http
    port: 80
    targetPort: 9376
    protocol: TCP
    appProtocol: kubernetes.io/ws

then an HTTPRoute that includes this Service as a backendRef will automatically upgrade the connection to use WebSockets rather than assuming that the connection is pure HTTP.

For more information, check out GEP-1911.

New additions to experimental channel

Named rules for *Route resources

The rules field in HTTPRoute and GRPCRoute resources can now be named, in order to make it easier to reference the specific rule, for example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: multi-color-route
  namespace: faces
spec:
  parentRefs:
  - name: my-gateway
    kind: Gateway
    port: 80
  rules:
  - name: center-rule
    matches:
    - path:
        type: PathPrefix
        value: /color/center
    backendRefs:
    - name: color-center
      port: 80
  - name: edge-rule
    matches:
    - path:
        type: PathPrefix
        value: /color/edge
    backendRefs:
    - name: color-edge
      port: 80

Logging or status messages can now refer to these two rules as center-rule or edge-rule instead of being forced to refer to them by index. For more information, see GEP-995.

HTTPRoute retry support

Gateway API 1.2.0 introduces experimental support for counted HTTPRoute retries. For example, the following HTTPRoute configuration retries requests to the /face path up to 3 times with a 500ms delay between retries:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: face-with-retries
  namespace: faces
spec:
  parentRefs:
  - name: my-gateway
    kind: Gateway
    port: 80
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /face
    backendRefs:
    - name: face
      port: 80
    retry:
      codes: [500, 502, 503, 504]
      attempts: 3
      backoff: 500ms

For more information, check out GEP 1731.

HTTPRoute percentage-based mirroring

Gateway API has long supported the Request Mirroring feature, which allows sending the same request to multiple backends. In Gateway API 1.2.0, we're introducing percentage-based mirroring, which allows you to specify a percentage of requests to mirror to a different backend. For example, the following HTTPRoute configuration mirrors 42% of requests to the color-mirror backend:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: color-mirror-route
  namespace: faces
spec:
  parentRefs:
  - name: mirror-gateway
  hostnames:
  - mirror.example
  rules:
  - backendRefs:
    - name: color
      port: 80
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          name: color-mirror
          port: 80
        percent: 42 # This value must be an integer.

There's also a fraction stanza which can be used in place of percent, to allow for more precise control over exactly what amount of traffic is mirrored, for example:

...
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          name: color-mirror
          port: 80
        fraction:
          numerator: 1
          denominator: 10000

This configuration mirrors 1 in 10,000 requests to the color-mirror backend, which may be relevant with very high request rates. For more details, see GEP-1731.

Additional backend TLS configuration

This release includes three additions related to TLS configuration for communications between a Gateway and a workload (a backend):

A new backendTLS field on Gateway

This new field allows you to specify the client certificate that a Gateway should use when connecting to backends.

A new subjectAltNames field on BackendTLSPolicy

Previously, the hostname field was used to configure both the SNI that a Gateway should send to a backend and the identity that should be provided by a certificate. When the new subjectAltNames field is specified, any certificate matching at least one of the specified SANs will be considered valid. This is particularly critical for SPIFFE where URI-based SANs may not be valid SNIs.

A new options field on BackendTLSPolicy

Similar to the TLS options field on Gateway Listeners, we believe the same concept will be broadly useful for TLS-specific configuration for Backend TLS.
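Putting the last two additions together, a BackendTLSPolicy using SANs might look roughly like the sketch below. This is an assumption-laden illustration, not text from the release: the API version, the SPIFFE URI, and the CA ConfigMap name are all hypothetical, and experimental-channel field names may change.

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha3  # experimental channel; version is an assumption
kind: BackendTLSPolicy
metadata:
  name: color-backend-tls
  namespace: faces
spec:
  targetRefs:
  - group: ""
    kind: Service
    name: color
  validation:
    hostname: color.faces.svc        # still used as the SNI sent to the backend
    subjectAltNames:                  # a certificate matching any listed SAN is valid
    - type: URI
      uri: spiffe://cluster.local/ns/faces/sa/color  # hypothetical SPIFFE ID
    caCertificateRefs:
    - group: ""
      kind: ConfigMap
      name: color-ca                  # hypothetical CA bundle
```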

For more information, check out GEP-3135.

More changes

For a full list of the changes included in this release, please refer to the v1.2.0 release notes.

Project updates

Beyond the technical, the v1.2 release also marks a few milestones in the life of the Gateway API project itself.

Release process improvements

Gateway API has never been intended to be a static API, and as more projects use it as a component to build on, it's become clear that we need to bring some more predictability to Gateway API releases. To that end, we're pleased - and a little ne

·kubernetes.io·
Gateway API v1.2: WebSockets Timeouts Retries and More
LICENSE.TXT
LICENSE.TXT

LICENSE.TXT


November 21, 2024 at 09:23AM

via Instapaper

·youtube.com·
LICENSE.TXT
Metrics logs traces and mayhem: introducing an observability adventure game powered by Grafana Alloy and OTel | Grafana Labs
Metrics logs traces and mayhem: introducing an observability adventure game powered by Grafana Alloy and OTel | Grafana Labs

Metrics, logs, traces, and mayhem: introducing an observability adventure game powered by Grafana Alloy and OTel | Grafana Labs


November 21, 2024 at 06:13AM

via Instapaper

·grafana.com·
Metrics logs traces and mayhem: introducing an observability adventure game powered by Grafana Alloy and OTel | Grafana Labs
How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack
How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack

How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack

https://kubernetes.io/blog/2024/11/21/dynamic-kubernetes-api-server-for-cozystack/

Hi there! I'm Andrei Kvapil, but you might know me as @kvaps in communities dedicated to Kubernetes and cloud-native tools. In this article, I want to share how we implemented our own extension api-server in the open-source PaaS platform, Cozystack.

Kubernetes truly amazes me with its powerful extensibility features. You're probably already familiar with the controller concept and frameworks like kubebuilder and operator-sdk that help you implement it. In a nutshell, they allow you to extend your Kubernetes cluster by defining custom resources (CRDs) and writing additional controllers that handle your business logic for reconciling and managing these kinds of resources. This approach is well-documented, with a wealth of information available online on how to develop your own operators.

However, this is not the only way to extend the Kubernetes API. For more complex scenarios, such as implementing imperative logic, managing subresources, and dynamically generating responses, the Kubernetes API aggregation layer provides an effective alternative. Through the aggregation layer, you can develop a custom extension API server and seamlessly integrate it within the broader Kubernetes API framework.

In this article, I will explore the API aggregation layer, the types of challenges it is well-suited to address, cases where it may be less appropriate, and how we utilized this model to implement our own extension API server in Cozystack.

What Is the API Aggregation Layer?

First, let's get definitions straight to avoid any confusion down the road. The API aggregation layer is a feature in Kubernetes, while an extension api-server is a specific implementation of an API server for the aggregation layer. An extension API server is just like the standard Kubernetes API server, except it runs separately and handles requests for your specific resource types.

So, the aggregation layer lets you write your own extension API server, integrate it easily into Kubernetes, and directly process requests for resources in a certain group. Unlike the CRD mechanism, the extension API is registered in Kubernetes as an APIService, telling Kubernetes to consider this new API server and acknowledge that it serves certain APIs.
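For reference, the APIService registration behind the Cozystack example might look roughly like this (the name, group, and service come from the output shown further down; the two priority fields are illustrative assumptions):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.apps.cozystack.io
spec:
  group: apps.cozystack.io
  version: v1alpha1
  service:                      # the in-cluster Service fronting the extension api-server
    name: cozystack-api
    namespace: cozy-system
  groupPriorityMinimum: 1000    # assumed values; tune for your API group
  versionPriority: 15
```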

You can execute this command to list all registered apiservices:

kubectl get apiservices.apiregistration.k8s.io

Example APIService:

NAME                         SERVICE                     AVAILABLE   AGE
v1alpha1.apps.cozystack.io   cozy-system/cozystack-api   True        7h29m

As soon as the Kubernetes api-server receives requests for resources in the group v1alpha1.apps.cozystack.io, it redirects all those requests to our extension api-server, which can handle them based on the business logic we've built into it.

When to use the API Aggregation Layer

The API Aggregation Layer helps solve several issues where the usual CRD mechanism might not be enough. Let's break them down.

Imperative Logic and Subresources

Besides regular resources, Kubernetes also has something called subresources.

In Kubernetes, subresources are additional actions or operations you can perform on primary resources (like Pods, Deployments, Services) via the Kubernetes API. They provide interfaces to manage specific aspects of resources without affecting the entire object.

A simple example is status, which is traditionally exposed as a separate subresource that you can access independently from the parent object. The status field isn't meant to be changed by the user; it's updated by the system to reflect the object's current state.

But beyond /status, Pods in Kubernetes also have subresources like /exec, /portforward, and /log. Interestingly, instead of the usual declarative resources in Kubernetes, these represent endpoints for imperative operations like viewing logs, proxying connections, executing commands in a running container, and so on.

To support such imperative commands on your own API, you need to implement an extension API and an extension API server. Here are some well-known examples:

KubeVirt: An add-on for Kubernetes that extends its API capabilities to run traditional virtual machines. The extension api-server created as part of KubeVirt handles subresources like /restart, /console, and /vnc for virtual machines.

Knative: A Kubernetes add-on that extends its capabilities for serverless computing, implementing the /scale subresource to set up autoscaling for its resource types.

By the way, even though subresource logic in Kubernetes can be imperative, you can manage access to them declaratively using Kubernetes standard RBAC model.

For example, this is how you can control access to the /log and /exec subresources of the Pod kind:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]

You're not tied to use etcd

Usually, the Kubernetes API server uses etcd for its backend. However, implementing your own API server doesn't lock you into using only etcd. If it doesn't make sense to store your server's state in etcd, you can store information in any other system and generate responses on the fly. Here are a few cases to illustrate:

metrics-server is a standard extension for Kubernetes which allows you to view real-time metrics of your nodes and pods. It defines alternative Pod and Node kinds in its own metrics.k8s.io API. Requests to these resources are translated into metrics directly from Kubelet. So when you run kubectl top node or kubectl top pod, metrics-server fetches metrics from cAdvisor in real-time. It then returns these metrics to you. Since the information is generated in real-time and is only relevant at the moment of the request, there is no need to store it in etcd. This approach saves resources.

If needed, you can use a backend other than etcd, and you can even implement a Kubernetes-compatible API on top of it. For example, if you use Postgres, you can create a transparent representation of its entities in the Kubernetes API: databases, users, and grants within Postgres would appear as regular Kubernetes resources, thanks to your extension API server. You could manage them using kubectl or any other Kubernetes-compatible tool. Unlike controllers, which implement business logic using custom resources and reconciliation methods, an extension API server eliminates the need for a separate controller for every kind: you don't have to sync state between the Kubernetes API and your backend.

One-Time resources

Kubernetes has a special API used to provide users with information about their permissions. This is implemented using the SelfSubjectAccessReview API. One unusual detail of these resources is that you can't view them using get or list verbs. You can only create them (using the create verb) and receive output with information about what you have access to at that moment.

If you try to run kubectl get selfsubjectaccessreviews directly, you'll just get an error like this:

Error from server (MethodNotAllowed): the server does not allow this method on the requested resource

The reason is that the Kubernetes API server doesn't support any other interaction with this type of resource (you can only CREATE them).

The SelfSubjectAccessReview API supports commands such as:

kubectl auth can-i create deployments --namespace dev

When you run the command above, kubectl creates a SelfSubjectAccessReview using the Kubernetes API. This allows Kubernetes to fetch a list of possible permissions for your user. Kubernetes then generates a personalized response to your request in real-time. This logic is different from a scenario where this resource is simply stored in etcd.
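Under the hood, the request kubectl sends is roughly equivalent to creating this resource (a sketch mirroring the can-i example above):

```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    group: apps
    resource: deployments
    verb: create
    namespace: dev
```

The API server fills in status.allowed in the response it returns; nothing is persisted in etcd.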

Similarly, in KubeVirt's CDI (Containerized Data Importer) extension, which allows file uploads into a PVC from a local machine using the virtctl tool, a special token is required before the upload process begins. This token is generated by creating an UploadTokenRequest resource via the Kubernetes API. Kubernetes routes (proxies) all UploadTokenRequest resource creation requests to the CDI extension API server, which generates and returns the token in response.

Full control over conversion, validation, and output formatting

Your own API server can have all the capabilities of the vanilla Kubernetes API server. The resources you create in your API server can be validated immediately on the server side without additional webhooks. While CRDs also support server-side validation using Common Expression Language (CEL) for declarative validation and ValidatingAdmissionPolicies without the need for webhooks, a custom API server allows for more complex and tailored validation logic if needed.

Kubernetes allows you to serve multiple API versions for each resource type, traditionally v1alpha1, v1beta1 and v1. Only one version can be specified as the storage version. All requests to other versions must be automatically converted to the version specified as storage version. With CRDs, this mechanism is implemented using conversion webhooks. Whereas in an extension API server, you can implement your own conversion mechanism, choose to mix up different storage versions (one object might be serialized as v1, another as v2), or rely on an external backing API.

Directly implementing the Kubernetes API lets you format table output however you like and doesn't force you to follow the additionalPrinterColumns logic in CRDs. Instead, you can write your own formatter that formats the table output and custom fields in it. For example, when using additionalPrinterColumns, you can display field values only following the JSONPath logic. In your own API server, you can generate and insert values on the fly, formatting the table output as you wish.

Dynamic resource registration

The resources served by an extension api-server don't

·kubernetes.io·
How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack
Exclusive: GitHub launches $1.25M open source fund with a focus on security
Exclusive: GitHub launches $1.25M open source fund with a focus on security

Exclusive: GitHub launches $1.25M open source fund with a focus on security

The open source funding problem is very real, but a slew of initiatives have emerged of late, with startups, corporations, and venture capitalists launching…

November 20, 2024 at 10:55AM

via Instapaper

·techcrunch.com·
Exclusive: GitHub launches $1.25M open source fund with a focus on security
OCSF Joins the Linux Foundation: Accelerating the Standardization of Cybersecurity Data | Amazon Web Services
OCSF Joins the Linux Foundation: Accelerating the Standardization of Cybersecurity Data | Amazon Web Services

OCSF Joins the Linux Foundation: Accelerating the Standardization of Cybersecurity Data | Amazon Web Services

In the ever-evolving landscape of cybersecurity, the need for efficient, standardized ways to manage and analyze security data has never been more critical.…

November 19, 2024 at 02:47PM

via Instapaper

·aws.amazon.com·
OCSF Joins the Linux Foundation: Accelerating the Standardization of Cybersecurity Data | Amazon Web Services
Easy Amazon Deals! "Use 1 Point" on Black Friday Shopping
Easy Amazon Deals! "Use 1 Point" on Black Friday Shopping

Easy Amazon Deals! "Use 1 Point" on Black Friday Shopping

While we never suggest redeeming your loyalty rewards points at Amazon, there are many times throughout the year where if you redeem just a single point, you’ll…

November 19, 2024 at 01:48PM

via Instapaper

·travelfreely.com·
Easy Amazon Deals! "Use 1 Point" on Black Friday Shopping
DevOps Toolkit - Grand Finale - End to End Demo of the Observability Chosen Tech (You Choose! Ch. 4 Ep. 10) - https://www.youtube.com/watch?v=EpbT-kfD6kw
DevOps Toolkit - Grand Finale - End to End Demo of the Observability Chosen Tech (You Choose! Ch. 4 Ep. 10) - https://www.youtube.com/watch?v=EpbT-kfD6kw

Grand Finale - End to End Demo of the Observability Chosen Tech (You Choose!, Ch. 4, Ep. 10)

Choose Your Own Adventure: The Observability Odyssey - Grand Finale.

In this episode, we'll go through all the choices you made in this season.

This and all other episodes are available at https://www.youtube.com/playlist?list=PLyicRj904Z9-FzCPvGpVHgRQVYJpVmx3Z.

More information about the "Choose Your Own Adventure" project including the source code and links to all the videos can be found at https://github.com/vfarcic/cncf-demo.

٩( ᐛ )و Whitney's YouTube Channel → https://www.youtube.com/@wiggitywhitney

via YouTube https://www.youtube.com/watch?v=EpbT-kfD6kw

·youtube.com·
DevOps Toolkit - Grand Finale - End to End Demo of the Observability Chosen Tech (You Choose! Ch. 4 Ep. 10) - https://www.youtube.com/watch?v=EpbT-kfD6kw
Rebuilding my homelab: suffering as a service with Xe Iaso
Rebuilding my homelab: suffering as a service with Xe Iaso

Rebuilding my homelab: suffering as a service, with Xe Iaso

https://kube.fm/rebuilding-homelab-xe

Xe Iaso shares their journey in building a "compute as a faucet" home lab where infrastructure becomes invisible and tasks can be executed without manual intervention. The discussion covers everything from operating system selection to storage architecture and secure access patterns.

You will learn:

How to evaluate operating systems for your home lab — from Rocky Linux to Talos Linux, and why minimal, immutable operating systems are gaining traction.

How to implement a three-tier storage strategy combining Longhorn (replicated storage), NFS (bulk storage), and S3 (cloud storage) to handle different workload requirements.

How to secure your home lab with certificate-based authentication, WireGuard VPN, and proper DNS configuration while protecting your home IP address.

Sponsor

This episode is sponsored by Nutanix — innovate faster with a complete and open cloud-native stack for all your apps and data anywhere.

More info

Find all the links and info for this episode here: https://kube.fm/rebuilding-homelab-xe

Interested in sponsoring an episode? Learn more.

via KubeFM https://kube.fm

November 19, 2024 at 05:00AM

·kube.fm·
Rebuilding my homelab: suffering as a service with Xe Iaso
Traceroute Isn't Real
Traceroute Isn't Real
Almost nobody understands how traceroute works. Worse, it's not a real tool.
·gekk.info·
Traceroute Isn't Real
Linux Kernel 6.12 Has Landed – And It's a Big One
Linux Kernel 6.12 Has Landed – And It's a Big One
Linux kernel 6.12 is one of the most significant releases of the year, delivering a feature nearly 20 years in the making: true real-time computing.
·omgubuntu.co.uk·
Linux Kernel 6.12 Has Landed – And It's a Big One