Suggested Reads

DevOps Toolkit - Debunking Myths and Simplifying Compositions with Crossplane v2 - https://www.youtube.com/watch?v=ZQEVPnS3eeo

Debunking Myths and Simplifying Compositions with Crossplane v2

In this video, we debunk common misconceptions about Crossplane and introduce new features in Crossplane v2 that make it easier to use for both infrastructure and application management. Learn how Crossplane Compositions now support direct resource composition without Managed Resources, simplifying the process. We'll also explore the flexibility of using any programming language to define resources, making Crossplane more versatile than ever. Follow along to see practical examples and understand how these updates can benefit your setup.

#crossplane #kubernetes #platformengineering

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/internal-developer-platforms/debunking-myths-and-simplifying-compositions-with-crossplane-v2 🔗 Crossplane: https://crossplane.io 🎬 Crossplane Tutorial: https://youtube.com/playlist?list=PLyicRj904Z99i8U5JaNW5X3AyBvfQz-16

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Introduction 01:29 Crossplane Is NOT (Only) About Infrastructure 05:39 With and Without Crossplane Providers 09:39 Any Language 10:49 Wrap-Up

via YouTube https://www.youtube.com/watch?v=ZQEVPnS3eeo

·youtube.com·
DevOps Toolkit - Debunking Myths and Simplifying Compositions with Crossplane v2 - https://www.youtube.com/watch?v=ZQEVPnS3eeo
Some mistakes I made as a new manager
the trough of zero dopamine • managing the wrong amount • procrastinating on hard questions • indefinitely deferring maintenance • angsting instead of asking
·benkuhn.net·
Some mistakes I made as a new manager
Use "but" strategically
“But” is a negating word. Here’s how to use this intentionally, so you sound direct and positive.
·newsletter.weskao.com·
Use "but" strategically
Apple Needs a Snow Sequoia
The same year Apple launched the iPhone, it unveiled a massive upgrade to Mac OS X known as Leopard, sporting “300 New Features.” Two years later, it did something almost unheard of: it released Snow Leopard, an upgrade all about how little it added and how much it took away. Apple needs to make it snow again.
·reviews.ofb.biz·
Apple Needs a Snow Sequoia
Oracle customers confirm data stolen in alleged cloud breach is valid
Despite Oracle denying a breach of its Oracle Cloud federated SSO login servers and the theft of account data for 6 million people, BleepingComputer has confirmed with multiple companies that associated data samples shared by the threat actor are valid.
·bleepingcomputer.com·
Oracle customers confirm data stolen in alleged cloud breach is valid
Notes on MCP
I’ve been playing with Anthropic’s MCP for a little while now, and I have a few gripes.
·taoofmac.com·
Notes on MCP
SignalGate Isn’t About Signal
The Trump cabinet’s shocking leak of its plans to bomb Yemen raises myriad confidentiality and legal issues. The security of the encrypted messaging app Signal is not one of them.
·wired.com·
SignalGate Isn’t About Signal
Kubernetes v1.33 sneak peek

Kubernetes v1.33 sneak peek

https://kubernetes.io/blog/2025/03/26/kubernetes-v1-33-upcoming-changes/

As the release of Kubernetes v1.33 approaches, the Kubernetes project continues to evolve. Features may be deprecated, removed, or replaced to improve the overall health of the project. This blog post outlines some planned changes for the v1.33 release, which the release team believes you should be aware of to ensure the continued smooth operation of your Kubernetes environment and to keep you up-to-date with the latest developments. The information below is based on the current status of the v1.33 release and is subject to change before the final release date.

The Kubernetes API removal and deprecation process

The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API has been marked for removal in a future Kubernetes release. It will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.

Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.

Beta or pre-release API versions must be supported for 3 releases after the deprecation.

Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.

Whether an API is removed as a result of a feature graduating from beta to stable, or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the deprecation guide.

Deprecations and removals for Kubernetes v1.33

Deprecation of the stable Endpoints API

The EndpointSlices API has been stable since v1.21, and it effectively replaces the original Endpoints API. While the original Endpoints API was simple and straightforward, it posed challenges when scaling to large numbers of network endpoints. The EndpointSlices API introduced new features such as dual-stack networking, making the original Endpoints API ready for deprecation.

This deprecation only impacts those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. There will be a dedicated blog post with more details on the deprecation implications and migration plans in the coming weeks.

You can find more in KEP-4974: Deprecate v1.Endpoints.
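For readers planning the migration, here is a minimal sketch of what an EndpointSlice object looks like (the names and addresses are hypothetical; real slices are usually managed by the EndpointSlice controller on behalf of a Service). The kubernetes.io/service-name label is what ties a slice to its Service:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-svc-abc12        # hypothetical name
  labels:
    kubernetes.io/service-name: example-svc
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
```

Scripts that previously read Endpoints objects would instead list EndpointSlices filtered by that label, keeping in mind that one Service may have several slices.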

Removal of kube-proxy version information in node status

Following its deprecation in v1.31, as highlighted in the release announcement, the status.nodeInfo.kubeProxyVersion field will be removed in v1.33. The field was set by the kubelet, but its value was not consistently accurate, and it has been disabled by default since v1.31.

You can find more in KEP-4004: Deprecate status.nodeInfo.kubeProxyVersion field.

Removal of host network support for Windows pods

Windows Pod networking aimed to achieve feature parity with Linux and provide better cluster density by allowing containers to use the Node’s networking namespace. The original implementation landed as alpha in v1.26, but because it ran into unexpected containerd behaviours and alternative solutions were available, the Kubernetes project decided to withdraw the associated KEP. Support is expected to be fully removed in v1.33.

You can find more in KEP-3503: Host network support for Windows pods.

Featured improvement of Kubernetes v1.33

As authors of this article, we picked one improvement as the most significant change to call out!

Support for user namespaces within Linux Pods

One of the oldest open KEPs today is KEP-127, Pod security improvement by using Linux User namespaces for Pods. This KEP was first opened in late 2016, and after multiple iterations, had its alpha release in v1.25, initial beta in v1.30 (where it was disabled by default), and now is set to be a part of v1.33, where the feature is available by default.

This support will not impact existing Pods unless you manually specify pod.spec.hostUsers to opt in. As highlighted in the v1.30 sneak peek blog, this is an important milestone for mitigating vulnerabilities.

You can find more in KEP-127: Support User Namespaces in pods.
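As a hedged sketch of opting in (the Pod name and image are placeholders), setting pod.spec.hostUsers to false asks the kubelet to run the Pod in its own user namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo               # hypothetical name
spec:
  hostUsers: false                # false = give this Pod a separate user namespace
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

Inside the Pod, processes can appear to run as root while mapping to an unprivileged UID range on the host, which is what mitigates the container-breakout vulnerabilities mentioned above.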

Selected other Kubernetes v1.33 improvements

The following list of enhancements is likely to be included in the upcoming v1.33 release. This is not a commitment and the release content is subject to change.

In-place resource resize for vertical scaling of Pods

Pods are typically provisioned through workload resources such as Deployments or StatefulSets. Scalability requirements may call for horizontal scaling, by updating the Pod replica count, or vertical scaling, by updating the resources allocated to a Pod’s container(s). Before this enhancement, container resources defined in a Pod's spec were immutable, and updating any of these details within a Pod template would trigger Pod replacement.

But what if you could dynamically update the resource configuration for your existing Pods without restarting them?

KEP-1287 is precisely about allowing such in-place Pod updates. It opens up various possibilities for vertical scaling: scaling up stateful processes without downtime, scaling down seamlessly when traffic is low, and even allocating larger resources during startup that are eventually reduced once the initial setup is complete. This was released as alpha in v1.27, and is expected to land as beta in v1.33.

You can find more in KEP-1287: In-Place Update of Pod Resources.
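A minimal sketch of how this might look, assuming the beta API shape: resizePolicy declares, per resource, whether the container must restart when that resource changes (names and values here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo               # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # CPU can change in place
        - resourceName: memory
          restartPolicy: RestartContainer  # memory change restarts this container
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 512Mi
```

With the feature enabled, the resources are then updated through the Pod's resize subresource rather than by recreating the Pod.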

DRA’s ResourceClaim Device Status graduates to beta

The devices field in ResourceClaim status, originally introduced in the v1.32 release, is likely to graduate to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.

For example, reporting the interface name, MAC address, and IP addresses of network interfaces in the status of a ResourceClaim can significantly help in configuring and managing network services, as well as in debugging network related issues. You can read more about ResourceClaim Device Status in Dynamic Resource Allocation: ResourceClaim Device Status document.

Also, you can find more about the planned enhancement in KEP-4817: DRA: Resource Claim Status with possible standardized network interface data.

Ordered namespace deletion

This KEP introduces a more structured deletion process for Kubernetes namespaces to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.

You can find more in KEP-5080: Ordered namespace deletion.

Enhancements for indexed job management

These two KEPs are both set to graduate to GA to provide better reliability for Job handling, specifically for Indexed Jobs. KEP-3850 provides per-index backoff limits for Indexed Jobs, which allows each index to be fully independent of the others. KEP-3998 extends the Job API with a success policy for marking an Indexed Job as successfully completed even when not all indexes succeed.

You can find more in KEP-3850: Backoff Limit Per Index For Indexed Jobs and KEP-3998: Job success/completion policy.
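A sketch of the two knobs combined, with illustrative values: backoffLimitPerIndex isolates retries per index, and successPolicy lets the Job complete once the named indexes succeed:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo              # hypothetical name
spec:
  completionMode: Indexed
  completions: 5
  parallelism: 5
  backoffLimitPerIndex: 2         # each index may fail twice, independently
  maxFailedIndexes: 1             # tolerate at most one permanently failed index
  successPolicy:
    rules:
      - succeededIndexes: "0"     # the Job succeeds once index 0 succeeds
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox          # placeholder image
          command: ["sh", "-c", "echo index $JOB_COMPLETION_INDEX"]
```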

Want to know more?

New features and deprecations are also announced in the Kubernetes release notes. We will formally announce what's new in Kubernetes v1.33 as part of the CHANGELOG for that release.

The Kubernetes v1.33 release is planned for Wednesday, 23rd April 2025. Stay tuned for updates!

You can also see the announcements of changes in the release notes for:

Kubernetes v1.32

Kubernetes v1.31

Kubernetes v1.30

Get involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.

Follow us on Bluesky @kubernetes.io for the latest updates

Join the community discussion on Discuss

Join the community on Slack

Post questions (or answer questions) on Server Fault or Stack Overflow

Share your Kubernetes story

Read more about what’s happening with Kubernetes on the blog

Learn more about the Kubernetes Release Team

via Kubernetes Blog https://kubernetes.io/

March 26, 2025 at 02:30PM

·kubernetes.io·
Kubernetes v1.33 sneak peek
DevOps Toolkit - Ep16 - Ask Me Anything About DevOps Cloud Kubernetes Platform Engineering... w/Scott Rosenberg - https://www.youtube.com/watch?v=S3KRG64g5Eo

Ep16 - Ask Me Anything About DevOps, Cloud, Kubernetes, Platform Engineering,... w/Scott Rosenberg

There are no restrictions in this AMA session. You can ask anything about DevOps, Cloud, Kubernetes, Platform Engineering, containers, or anything else. We'll have a special guest Scott Rosenberg to help us out.

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: Codefresh 🔗 GitOps Argo CD Certifications: https://learning.codefresh.io (use "viktor" for a 50% discount) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

🎙️ New to streaming or looking to level up? Check out StreamYard and get $10 discount! 😍 https://streamyard.com/pal/d/5055462645956608

via YouTube https://www.youtube.com/watch?v=S3KRG64g5Eo

·youtube.com·
DevOps Toolkit - Ep16 - Ask Me Anything About DevOps Cloud Kubernetes Platform Engineering... w/Scott Rosenberg - https://www.youtube.com/watch?v=S3KRG64g5Eo
Webinar: Fuzzball 101
CIQ Fuzzball provides researchers a powerful API to automate the provisioning and management of the necessary infrastructure to run their workloads. It also delivers a powerful UI to help define these workflows.
·info.ciq.com·
Webinar: Fuzzball 101
Open Source Survival Guide

Open Source Survival Guide

https://chrisshort.net/abstracts/open-source-survival-guide/

Open Source Survival Guide provides practical rules for navigating the open source ecosystem. Learn how to balance community collaboration with business goals, contribute effectively, build trust, and maintain your sanity in the complex world of open source software development.

via Chris Short https://chrisshort.net/

March 26, 2025

·chrisshort.net·
Open Source Survival Guide
Fresh Swap Features for Linux Users in Kubernetes 1.32

Fresh Swap Features for Linux Users in Kubernetes 1.32

https://kubernetes.io/blog/2025/03/25/swap-linux-improvements/

Swap is a fundamental and invaluable Linux feature. It offers numerous benefits, such as effectively increasing a node’s memory by swapping out unused data, shielding nodes from system-level memory spikes, preventing Pods from crashing when they hit their memory limits, and much more. As a result, the node special interest group within the Kubernetes project has invested significant effort into supporting swap on Linux nodes.

The 1.22 release introduced Alpha support for configuring swap memory usage for Kubernetes workloads running on Linux, on a per-node basis. Later, in release 1.28, support for swap on Linux nodes graduated to Beta, along with many new improvements. Subsequent Kubernetes releases brought further improvements, paving the way to GA in the near future.

Prior to version 1.22, Kubernetes did not provide support for swap memory on Linux systems. This was due to the inherent difficulty in guaranteeing and accounting for pod memory utilization when swap memory was involved. As a result, swap support was deemed out of scope in the initial design of Kubernetes, and the default behavior of a kubelet was to fail to start if swap memory was detected on a node.

In version 1.22, the swap feature for Linux was introduced in its Alpha stage. This gave Linux users the opportunity to experiment with swap for the first time. However, as an Alpha version it was not fully developed and worked only partially, in limited environments.

In version 1.28, swap support on Linux nodes was promoted to Beta. The Beta version was a drastic leap forward: not only did it fix a large number of bugs and make swap work in a stable way, but it also brought cgroup v2 support and introduced a wide variety of tests covering complex scenarios such as node-level pressure. It also brought many exciting new capabilities, such as the LimitedSwap behavior which sets an auto-calculated swap limit for containers, OpenMetrics instrumentation support (through the /metrics/resource endpoint), Summary API support for VerticalPodAutoscalers (through the /stats/summary endpoint), and more.

Today we are working on more improvements, paving the way for GA. Currently, the focus is especially towards ensuring node stability, enhanced debug abilities, addressing user feedback, polishing the feature and making it stable. For example, in order to increase stability, containers in high-priority pods cannot access swap which ensures the memory they need is ready to use. In addition, the UnlimitedSwap behavior was removed since it might compromise the node's health. Secret content protection against swapping has also been introduced (see relevant security-risk section for more info).

To conclude, compared to previous releases, the kubelet's support for running with swap enabled is more stable and robust, more user-friendly, and addresses many known shortcomings. That said, the NodeSwap feature introduces basic swap support, and this is just the beginning. In the near future, additional features are planned to enhance swap functionality in various ways, such as improving evictions, extending the API, increasing customizability, and more!

How do I use it?

In order for the kubelet to initialize on a swap-enabled node, the failSwapOn field must be set to false in the kubelet's configuration, or the deprecated --fail-swap-on command-line flag must be set to false.

It is possible to configure the memorySwap.swapBehavior option to define the manner in which a node utilizes swap memory. For instance, this fragment goes into the kubelet's configuration file:

memorySwap:
  swapBehavior: LimitedSwap

The currently available configuration options for swapBehavior are:

NoSwap (default): Kubernetes workloads cannot use swap. However, processes outside of Kubernetes' scope, like system daemons (such as kubelet itself!) can utilize swap. This behavior is beneficial for protecting the node from system-level memory spikes, but it does not safeguard the workloads themselves from such spikes.

LimitedSwap: Kubernetes workloads can utilize swap memory, but with certain limitations. The amount of swap available to a Pod is determined automatically, based on the proportion of the memory requested relative to the node's total memory. Only non-high-priority Pods under the Burstable Quality of Service (QoS) tier are permitted to use swap. For more details, see the section below.

If memorySwap is not configured, the kubelet by default applies the same behaviour as the NoSwap setting.

On Linux nodes, Kubernetes only supports running with swap enabled on hosts that use cgroup v2. On cgroup v1 systems, Kubernetes workloads are not allowed to use swap memory.

Install a swap-enabled cluster with kubeadm

Before you begin

It is required for this demo that the kubeadm tool be installed, following the steps outlined in the kubeadm installation guide. If swap is already enabled on the node, cluster creation may proceed. If swap is not enabled, please refer to the provided instructions for enabling swap.

Create a swap file and turn swap on

I'll demonstrate creating 4GiB of swap, both in the encrypted and unencrypted case.

Setting up unencrypted swap

An unencrypted swap file can be set up as follows.

Allocate storage and restrict access

fallocate --length 4GiB /swapfile
chmod 600 /swapfile

Format the swap space

mkswap /swapfile

Activate the swap space for paging

swapon /swapfile

Setting up encrypted swap

An encrypted swap file can be set up as follows. Bear in mind that this example uses the cryptsetup binary (which is available on most Linux distributions).

Allocate storage and restrict access

fallocate --length 4GiB /swapfile
chmod 600 /swapfile

Create an encrypted device backed by the allocated storage

cryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap

Format the swap space

mkswap /dev/mapper/cryptswap

Activate the swap space for paging

swapon /dev/mapper/cryptswap

Verify that swap is enabled

Swap can be verified to be enabled with either the swapon -s command or the free command:

swapon -s
Filename    Type       Size     Used  Priority
/dev/dm-0   partition  4194300  0     -2

free -h
        total  used   free   shared  buff/cache  available
Mem:    3.8Gi  1.3Gi  249Mi  25Mi    2.5Gi       2.5Gi
Swap:   4.0Gi  0B     4.0Gi

Enable swap on boot

After setting up swap, to start the swap file at boot time, you either set up a systemd unit to activate (encrypted) swap, or you add a line similar to /swapfile swap swap defaults 0 0 into /etc/fstab.

Set up a Kubernetes cluster that uses swap-enabled nodes

To make things clearer, here is an example kubeadm configuration file kubeadm-config.yaml for the swap enabled cluster.

---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap

Then create a single-node cluster using kubeadm init --config kubeadm-config.yaml. During init, a warning is shown when swap is enabled on the node and the kubelet's failSwapOn is set to true. We plan to remove this warning in a future release.

How is the swap limit being determined with LimitedSwap?

The configuration of swap memory, including its limitations, presents a significant challenge. Not only is it prone to misconfiguration, but as a system-level property, any misconfiguration could potentially compromise the entire node rather than just a specific workload. To mitigate this risk and ensure the health of the node, we have implemented Swap with automatic configuration of limitations.

With LimitedSwap, Pods that do not fall under the Burstable QoS classification (i.e. BestEffort/Guaranteed QoS Pods) are prohibited from utilizing swap memory. BestEffort QoS Pods exhibit unpredictable memory consumption patterns and lack information regarding their memory usage, making it difficult to determine a safe allocation of swap memory. Conversely, Guaranteed QoS Pods are typically employed for applications that rely on the precise allocation of resources specified by the workload, with memory being immediately available. To maintain the aforementioned security and node health guarantees, these Pods are not permitted to use swap memory when LimitedSwap is in effect. In addition, high-priority pods are not permitted to use swap, in order to ensure the memory they consume always stays resident in RAM and is hence ready to use.

Prior to detailing the calculation of the swap limit, it is necessary to define the following terms:

nodeTotalMemory: The total amount of physical memory available on the node.

totalPodsSwapAvailable: The total amount of swap memory on the node that is available for use by Pods (some swap memory may be reserved for system use).

containerMemoryRequest: The container's memory request.

Swap limitation is configured as: (containerMemoryRequest / nodeTotalMemory) × totalPodsSwapAvailable

In other words, the amount of swap that a container is able to use is proportionate to its memory request, the node's total physical memory and the total amount of swap memory on the node that is available for use by Pods.
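As a worked example with assumed numbers (a container requesting 2 GiB on a node with 16 GiB of RAM and 4 GiB of swap available to Pods):

```latex
\text{swapLimit} = \frac{2\,\mathrm{GiB}}{16\,\mathrm{GiB}} \times 4\,\mathrm{GiB} = 0.5\,\mathrm{GiB} = 512\,\mathrm{MiB}
```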

It is important to note that, for containers within Burstable QoS Pods, it is possible to opt-out of swap usage by specifying memory requests that are equal to memory limits. Containers configured in this manner will not have access to swap memory.
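For example, this (hypothetical) Burstable Pod opts its container out of swap by setting the memory request equal to the memory limit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-swap-demo              # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          cpu: 250m               # cpu request without a limit keeps the Pod Burstable
          memory: 512Mi
        limits:
          memory: 512Mi           # memory request == limit: no swap for this container
```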

How does it work?

There are a number of possible ways that one could envision swap use on a node. When swap is already provisioned and available on a node, the kubelet is able to be configured so that:

It can start with swap on.

It will direct the Container Runtime Interface to allocate zero swap memory to Kubernetes workloads by default.

Swap configuration on a node is exposed to a cluster administrator via memorySwap in the KubeletConfiguration.

·kubernetes.io·
Fresh Swap Features for Linux Users in Kubernetes 1.32