1_r/devopsish

54589 bookmarks
For a short period of time this was actually a decent source of information; that clearly has passed | BuzzFeed News Is Shutting Down, Company Laying Off 180 Staffers
BuzzFeed is shutting down BuzzFeed News because it is not able to turn a profit, according to a memo CEO Jonah Peretti sent to company staff Thursday. The digital publisher is laying off 15% of its…
·variety.com·
The Silent Killer of Your Operating Practice: Fear
Amanda Schwartz Ramirez, former PayPal strategy leader and now COO advisor for startups, shares the 5 biggest fears that can derail your company's strategic planning sessions (and tactical advice for how to sidestep them).
·review.firstround.com·
SLSA v1.0 is now final!
After almost two years since SLSA’s initial preview release, we are pleased to announce our first official stable version, SLSA v1.0! The full announcement can be found at the OpenSSF press release, and a description of changes can be found at What’s new in v1.0. Thank you to all members of the SLSA community who made this possible through your feedback, suggestions, discussions, and pull requests!
·slsa.dev·
Blog: Kubernetes 1.27: Query Node Logs Using The Kubelet API
Author: Aravindh Puthiyaparambil (Red Hat)

Kubernetes 1.27 introduced a new feature called Node log query that allows viewing logs of services running on the node.

What problem does it solve?

Cluster administrators face issues when debugging malfunctioning services running on a node. They usually have to SSH or RDP into the node to view the service's logs and debug the issue. The Node log query feature helps in this scenario by allowing the cluster administrator to view the logs using kubectl. This is especially useful with Windows nodes, where the node can reach the Ready state while containers fail to come up due to CNI misconfigurations and other issues that are not easily identifiable from the Pod status.

How does it work?

The kubelet already has a /var/log/ viewer that is accessible via the node proxy endpoint. The feature supplements this endpoint with a shim that shells out to journalctl on Linux nodes, and the Get-WinEvent cmdlet on Windows nodes. It then uses the existing filters provided by those commands to allow filtering the logs. The kubelet also uses heuristics to retrieve the logs: if the user does not know whether a given system service logs to a file or to the native system logger, the heuristics first check the native operating system logger, and if that is not available, they attempt to retrieve the first logs from /var/log/servicename, /var/log/servicename.log, or /var/log/servicename/servicename.log. On Linux we assume that service logs are available via journald and that journalctl is installed. On Windows we assume that service logs are available in the application log provider. Also note that fetching node logs is only available if you are authorized to do so (in RBAC, that's get and create access to nodes/proxy). The privileges that you need to fetch node logs also allow elevation-of-privilege attacks, so be careful about how you manage them.

How do I use it?
To use the feature, ensure that the NodeLogQuery feature gate is enabled for that node, and that the kubelet configuration options enableSystemLogHandler and enableSystemLogQuery are both set to true. You can then query the logs from all your nodes or just a subset. Here is an example to retrieve the kubelet service logs from a node:

# Fetch kubelet logs from a node named node-1.example
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"

You can further filter the query to narrow down the results:

# Fetch kubelet logs from a node named node-1.example that have the word "error"
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error"

You can also fetch files from /var/log/ on a Linux node:

kubectl get --raw "/api/v1/nodes/insert-node-name-here/proxy/logs/?query=/insert-log-file-name-here"

You can read the documentation for all the available options.

How do I help?

Please use the feature and provide feedback by opening GitHub issues or reaching out to us on the #sig-windows channel on the Kubernetes Slack or the SIG Windows mailing list.
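Since the endpoint is a plain node-proxy URL, the query parameters compose; the Kubernetes system-logs documentation also lists filters such as tailLines and sinceTime. A minimal sketch building such a URL — the node name is hypothetical, and the actual kubectl call (commented out) needs a cluster with NodeLogQuery enabled:

```shell
# Hypothetical node name; replace with a real node from `kubectl get nodes`.
NODE="node-1.example"

# Last 100 kubelet log lines that mention "error".
LOGS_PATH="/api/v1/nodes/${NODE}/proxy/logs/?query=kubelet&pattern=error&tailLines=100"
echo "$LOGS_PATH"

# To actually run it (requires get/create access to nodes/proxy):
# kubectl get --raw "$LOGS_PATH"
```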
·kubernetes.io·
non aesthetic things on Twitter
pic.twitter.com/G8dPZ5kRDC— non aesthetic things (@PicturesFoIder) April 19, 2023
·twitter.com·
Blog: Kubernetes 1.27: Single Pod Access Mode for PersistentVolumes Graduates to Beta
Author: Chris Henzie (Google)

With the release of Kubernetes v1.27, the ReadWriteOncePod feature has graduated to beta. In this blog post, we'll take a closer look at this feature, what it does, and how it has evolved in the beta release.

What is ReadWriteOncePod?

ReadWriteOncePod is a new access mode for PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) introduced in Kubernetes v1.22. This access mode enables you to restrict volume access to a single pod in the cluster, ensuring that only one pod can write to the volume at a time. This can be particularly useful for stateful workloads that require single-writer access to storage. For more context on access modes and how ReadWriteOncePod works, read "What are access modes and why are they important?" in the Introducing Single Pod Access Mode for PersistentVolumes article from 2021.

Changes in the ReadWriteOncePod beta

The ReadWriteOncePod beta adds support for scheduler preemption of pods using ReadWriteOncePod PVCs. Scheduler preemption allows higher-priority pods to preempt lower-priority pods so that they can start running on the same node. With this release, pods using ReadWriteOncePod PVCs can also be preempted if a higher-priority pod requires the same PVC.

How can I start using ReadWriteOncePod?

With ReadWriteOncePod now in beta, it will be enabled by default in cluster versions v1.27 and beyond. Note that ReadWriteOncePod is only supported for CSI volumes. Before using this feature you will need to update the following CSI sidecars to these versions or greater: csi-provisioner v3.0.0+, csi-attacher v3.3.0+, csi-resizer v1.3.0+.

To start using ReadWriteOncePod, create a PVC with the ReadWriteOncePod access mode:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: single-writer-only
spec:
  accessModes:
  - ReadWriteOncePod # Allow only a single pod to access single-writer-only.
  resources:
    requests:
      storage: 1Gi

If your storage plugin supports dynamic provisioning, new PersistentVolumes will be created with the ReadWriteOncePod access mode applied. Read Migrating existing PersistentVolumes for details on migrating existing volumes to use ReadWriteOncePod.

How can I learn more?

Please see the alpha blog post and KEP-2485 for more details on the ReadWriteOncePod access mode and the motivations for CSI spec changes.

How do I get involved?

The Kubernetes #csi Slack channel and any of the standard SIG Storage communication channels are great mediums to reach the SIG Storage and CSI teams. Special thanks to the following people whose thoughtful reviews and feedback helped shape this feature: Abdullah Gharaibeh (ahg-g), Aldo Culquicondor (alculquicondor), Antonio Ojea (aojea), David Eads (deads2k), Jan Šafránek (jsafrane), Joe Betz (jpbetz), Kante Yin (kerthcet), Michelle Au (msau42), Tim Bannister (sftim), Xing Yang (xing-yang).

If you're interested in getting involved with the design and development of CSI or any part of the Kubernetes storage system, join the Kubernetes Storage Special Interest Group (SIG). We're rapidly growing and always welcome new contributors.
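To see the single-writer guarantee in practice, a Pod simply references the claim; a second Pod referencing the same ReadWriteOncePod PVC would stay Pending while the first runs. A sketch with illustrative names (the Pod name, image, and mount path are assumptions, not from the post):

```shell
# Sketch of a Pod consuming the single-writer-only PVC from the post.
# All names besides the claimName are illustrative.
POD_MANIFEST="$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: single-writer
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: single-writer-only
EOF
)"
echo "$POD_MANIFEST"
# kubectl apply with this manifest requires a v1.27+ cluster and a
# CSI driver with the sidecar versions listed above.
```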
·kubernetes.io·
Announcing Fedora Linux 38 - Fedora Magazine
Today I’m excited to share the results of the hard work of thousands of Fedora Project contributors: the Fedora Linux 38 release is here! With this release, we’re starting a new on-time streak. In fact, we’re ready a week early! As always, you should make sure your system is fully up-to-date before upgrading from a […]
·fedoramagazine.org·
My home phone costs 85 cents a month
After trying several services for home phones, I found a solution that costs me about $0.85 per month. ️️☎️
·major.io·
Blog: Kubernetes 1.27: Efficient SELinux volume relabeling (Beta)
Author: Jan Šafránek (Red Hat)

The problem

On Linux with Security-Enhanced Linux (SELinux) enabled, it's traditionally the container runtime that applies SELinux labels to a Pod and all its volumes. Kubernetes only passes the SELinux label from a Pod's securityContext fields to the container runtime. The container runtime then recursively changes the SELinux label on all files that are visible to the Pod's containers. This can be time-consuming if there are many files on the volume, especially when the volume is on a remote filesystem.

Note: If a container uses a subPath of a volume, only that subPath of the whole volume is relabeled. This allows two pods with two different SELinux labels to use the same volume, as long as they use different subPaths of it.

If a Pod does not have any SELinux label assigned in the Kubernetes API, the container runtime assigns a unique random one, so a process that potentially escapes the container boundary cannot access data of any other container on the host. The container runtime still recursively relabels all pod volumes with this random SELinux label.

Improvement using mount options

If a Pod and its volume meet all of the following conditions, Kubernetes will mount the volume directly with the right SELinux label. Such a mount happens in constant time, and the container runtime will not need to recursively relabel any files on it.

The operating system must support SELinux. Without SELinux support detected, the kubelet and the container runtime do not do anything with regard to SELinux.

The feature gates ReadWriteOncePod and SELinuxMountReadWriteOncePod must be enabled. These feature gates are Beta in Kubernetes 1.27 and were Alpha in 1.25. With either of these feature gates disabled, SELinux labels will always be applied by the container runtime by a recursive walk through the volume (or its subPaths).

The Pod must have at least seLinuxOptions.level assigned in its Pod Security Context, or all Pod containers must have it set in their Security Contexts. Kubernetes will read the default user, role and type from the operating system defaults (typically system_u, system_r and container_t). Without Kubernetes knowing at least the SELinux level, the container runtime will assign a random one after the volumes are mounted, and will still relabel the volumes recursively in that case.

The volume must be a PersistentVolume with access mode ReadWriteOncePod. This is a limitation of the initial implementation. As described above, two Pods can have different SELinux labels and still use the same volume, as long as they use different subPaths of it. This use case is not possible when the volumes are mounted with the SELinux label, because the whole volume is mounted, and most filesystems don't support mounting a single volume multiple times with multiple SELinux labels. If running two Pods with two different SELinux contexts and different subPaths of the same volume is necessary in your deployments, please comment in the KEP issue (or upvote any existing comment; it's best not to duplicate). Such pods may not run when the feature is extended to cover all volume access modes.

The volume plugin or the CSI driver responsible for the volume must support mounting with SELinux mount options. These in-tree volume plugins support mounting with SELinux mount options: fc, iscsi, and rbd. CSI drivers that support mounting with SELinux mount options must announce that in their CSIDriver instance by setting the seLinuxMount field. Volumes managed by other volume plugins or CSI drivers that don't set seLinuxMount: true will be recursively relabeled by the container runtime.

Mounting with SELinux context

When all the aforementioned conditions are met, the kubelet will pass the -o context=<SELinux label> mount option to the volume plugin or CSI driver. CSI driver vendors must ensure that this mount option is supported by their CSI driver and, if necessary, that the CSI driver appends other mount options that are needed for -o context to work. For example, NFS may need -o context=<SELinux label>,nosharecache, so each volume mounted from the same NFS server can have a different SELinux label value. Similarly, CIFS may need -o context=<SELinux label>,nosharesock. It's up to the CSI driver vendor to test their CSI driver in a SELinux-enabled environment before setting seLinuxMount: true in the CSIDriver instance.

How can I learn more?

SELinux in containers: see the excellent visual SELinux guide by Daniel J Walsh. Note that the guide is older than Kubernetes; it describes Multi-Category Security (MCS) mode using virtual machines as an example, but a similar concept is used for containers. See this series of blog posts for details on how exactly SELinux is applied to containers by container runtimes: How SELinux separates containers using Multi-Level Security; Why you should be using Multi-Category Security for your Linux containers. Read the KEP: Speed up SELinux volume relabeling using mounts.
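For CSI driver authors, the announcement mentioned in the post is a single field on the CSIDriver object. A sketch — the driver name example.csi.vendor.io is a placeholder, while seLinuxMount is the real spec field:

```shell
# Sketch of a CSIDriver object announcing SELinux mount support.
# The driver name is hypothetical.
CSIDRIVER_MANIFEST="$(cat <<'EOF'
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io
spec:
  seLinuxMount: true
EOF
)"
echo "$CSIDRIVER_MANIFEST"
# Only set seLinuxMount: true after testing the driver with
# -o context mounts in a SELinux-enabled environment.
```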
·kubernetes.io·
Nine more US states join federal lawsuit against Google over ad tech
Nine states, including Michigan and Nebraska, have joined a U.S. Department of Justice lawsuit against Alphabet's Google which alleges the search and advertising company broke antitrust law in running its digital advertising business, the department said on Monday.
·reuters.com·
Kubernetes Network Policies Explained
What are Kubernetes Network Policies and how do you use them? In this video, I'll show you how to use Kubernetes Network Policies to restrict access between Pods. I'll also show you the pros and cons of k8s Network Policies. #kubernetes #k8s #kubernetesnetworking
Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/vfarcic/f67624c05df3d949c8d9a6976adb4631
🔗 Kubernetes Network Policies: https://kubernetes.io/docs/concepts/services-networking/network-policies
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Twitter: https://twitter.com/vfarcic
➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Introduction To Kubernetes Network Policies
04:07 What Are Kubernetes Network Policies?
06:12 Applications Without Network Policies
08:13 Kubernetes Network Policies In Action
15:20 Pros And Cons Of Kubernetes Network Policies
·youtube.com·
How to Test Your Proxy Support in Docker
This post explains and demonstrates how to ensure that, when a proxy is configured, all outgoing requests are only ever made through the proxy (and how to test this in Docker).
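One common way to exercise this kind of setup — not necessarily the article's exact method — is to inject the standard proxy environment variables into a container and point them at a proxy you control. The proxy address and the curlimages/curl image below are assumptions:

```shell
# Hypothetical proxy endpoint reachable from containers
# (host.docker.internal resolves to the host on Docker Desktop).
PROXY="http://host.docker.internal:3128"

# Compose a docker run invocation that forces the standard proxy
# environment variables, which well-behaved HTTP clients honor.
DOCKER_CMD="docker run --rm -e HTTP_PROXY=${PROXY} -e HTTPS_PROXY=${PROXY} -e NO_PROXY=localhost,127.0.0.1 curlimages/curl -sS https://example.com/"
echo "$DOCKER_CMD"
# eval "$DOCKER_CMD"   # requires Docker and a proxy listening at $PROXY
```

If the proxy's access log shows the request (or the request fails when the proxy is stopped), traffic is actually going through it.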
·appsmith.com·
Blog: Kubernetes 1.27: More fine-grained pod topology spread policies reached beta
Authors: Alex Wang (Shopee), Kante Yin (DaoCloud), Kensei Nakada (Mercari)

In Kubernetes v1.19, Pod topology spread constraints went to general availability (GA). As time passed, we - SIG Scheduling - received feedback from users, and, as a result, we're actively working on improving the Topology Spread feature via three KEPs. All of these features have reached beta in Kubernetes v1.27 and are enabled by default. This blog post introduces each feature and the use case behind it.

KEP-3022: min domains in Pod Topology Spread

Pod Topology Spread has the maxSkew parameter to define the degree to which Pods may be unevenly distributed. But there wasn't a way to control the number of domains over which we should spread. Some users want to force spreading Pods over a minimum number of domains and, if there aren't enough already present, make the cluster-autoscaler provision them.

Kubernetes v1.24 introduced the minDomains parameter for pod topology spread constraints as an alpha feature. Via the minDomains parameter, you can define the minimum number of domains. For example, assume there are 3 Nodes with enough capacity, and a newly created ReplicaSet has the following topologySpreadConstraints in its Pod template:

...
topologySpreadConstraints:
- maxSkew: 1
  minDomains: 5 # requires 5 Nodes at least (because each Node has a unique hostname).
  whenUnsatisfiable: DoNotSchedule # minDomains is valid only when DoNotSchedule is used.
  topologyKey: kubernetes.io/hostname
  labelSelector:
    matchLabels:
      foo: bar

In this case, 3 Pods will be scheduled to those 3 Nodes, but the other 2 Pods from this ReplicaSet will be unschedulable until more Nodes join the cluster. You can imagine that the cluster autoscaler provisions new Nodes based on these unschedulable Pods, and as a result, the replicas are finally spread over 5 Nodes.
KEP-3094: Take taints/tolerations into consideration when calculating podTopologySpread skew

Before this enhancement, when you deployed a pod with podTopologySpread configured, kube-scheduler would take the Nodes that satisfy the Pod's nodeAffinity and nodeSelector into consideration in filtering and scoring, but would not care about whether the node taints are tolerated by the incoming pod or not. This may lead to a node with an untolerated taint being the only candidate for spreading, and as a result, the pod will be stuck in Pending if it doesn't tolerate the taint.

To allow more fine-grained decisions about which Nodes to account for when calculating spreading skew, Kubernetes 1.25 introduced two new fields within topologySpreadConstraints to define node inclusion policies: nodeAffinityPolicy and nodeTaintsPolicy. A manifest that applies these policies looks like the following:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Configure a topology spread constraint
  topologySpreadConstraints:
  - maxSkew: integer
    # ...
    nodeAffinityPolicy: [Honor|Ignore]
    nodeTaintsPolicy: [Honor|Ignore]
  # other Pod fields go here

The nodeAffinityPolicy field indicates how Kubernetes treats a Pod's nodeAffinity or nodeSelector for pod topology spreading. If Honor, kube-scheduler filters out nodes not matching nodeAffinity/nodeSelector in the calculation of spreading skew. If Ignore, all nodes will be included, regardless of whether they match the Pod's nodeAffinity/nodeSelector or not. For backwards compatibility, nodeAffinityPolicy defaults to Honor.

The nodeTaintsPolicy field defines how Kubernetes considers node taints for pod topology spreading. If Honor, only tainted nodes for which the incoming pod has a toleration will be included in the calculation of spreading skew. If Ignore, kube-scheduler will not consider the node taints at all in the calculation of spreading skew, so a node with a taint the pod does not tolerate will also be included.
For backwards compatibility, nodeTaintsPolicy defaults to Ignore.

The feature was introduced in v1.25 as alpha. By default it was disabled, so if you wanted to use this feature in v1.25, you had to explicitly enable the feature gate NodeInclusionPolicyInPodTopologySpread. In the following v1.26 release, the associated feature graduated to beta and is enabled by default.

KEP-3243: Respect Pod topology spread after rolling upgrades

Pod Topology Spread uses the field labelSelector to identify the group of pods over which spreading will be calculated. When using topology spreading with Deployments, it is common practice to use the labelSelector of the Deployment as the labelSelector in the topology spread constraints. However, this implies that all pods of a Deployment are part of the spreading calculation, regardless of whether they belong to different revisions. As a result, when a new revision is rolled out, spreading will apply across pods from both the old and new ReplicaSets, and so by the time the new ReplicaSet is completely rolled out and the old one is rolled back, the actual spreading we are left with may not match expectations, because the deleted pods from the older ReplicaSet will cause a skewed distribution for the remaining pods. To avoid this problem, in the past users needed to add a revision label to the Deployment and update it manually at each rolling upgrade (both the label on the pod template and the labelSelector in the topologySpreadConstraints).

To solve this problem with a simpler API, Kubernetes v1.25 introduced a new field named matchLabelKeys to topologySpreadConstraints. matchLabelKeys is a list of pod label keys used to select the pods over which spreading will be calculated. The keys are used to look up values from the labels of the Pod being scheduled; those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod.
With matchLabelKeys, you don't need to update the pod.spec between different revisions. The controller or operator managing rollouts just needs to set different values for the same label key for different revisions. The scheduler will pick up the values automatically based on matchLabelKeys. For example, if you are configuring a Deployment, you can use the label keyed with pod-template-hash, which is added automatically by the Deployment controller, to distinguish between different revisions in a single Deployment:

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: foo
  matchLabelKeys:
  - pod-template-hash

Getting involved

These features are managed by Kubernetes SIG Scheduling. Please join us and share your feedback. We look forward to hearing from you!

How can I learn more?

Pod Topology Spread Constraints in the Kubernetes documentation
KEP-3022: min domains in Pod Topology Spread
KEP-3094: Take taints/tolerations into consideration when calculating PodTopologySpread skew
KEP-3243: Respect PodTopologySpread after rolling upgrades
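Putting the matchLabelKeys constraint into context, it normally lives inside a Deployment's pod template, where the Deployment controller stamps pod-template-hash onto each revision's pods. A sketch with illustrative names (the Deployment name, app label, and image are assumptions):

```shell
# Sketch: a Deployment whose pod template spreads per-revision,
# relying on the controller-added pod-template-hash label.
DEPLOY_MANIFEST="$(cat <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: foo
        matchLabelKeys:
        - pod-template-hash
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
EOF
)"
echo "$DEPLOY_MANIFEST"
# Applying this requires a v1.27 cluster (or v1.25/1.26 with the
# MatchLabelKeysInPodTopologySpread feature gate enabled).
```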
·kubernetes.io·