Suggested Reads

54937 bookmarks
Blog: Kubernetes 1.27: Efficient SELinux volume relabeling (Beta)
Author: Jan Šafránek (Red Hat)

The problem

On Linux with Security-Enhanced Linux (SELinux) enabled, it's traditionally the container runtime that applies SELinux labels to a Pod and all its volumes. Kubernetes only passes the SELinux label from a Pod's securityContext fields to the container runtime. The container runtime then recursively changes the SELinux label on all files that are visible to the Pod's containers. This can be time-consuming if there are many files on the volume, especially when the volume is on a remote filesystem.

Note: if a container uses a subPath of a volume, only that subPath of the whole volume is relabeled. This allows two pods that have two different SELinux labels to use the same volume, as long as they use different subPaths of it.

If a Pod does not have any SELinux label assigned in the Kubernetes API, the container runtime assigns a unique random one, so a process that potentially escapes the container boundary cannot access data of any other container on the host. The container runtime still recursively relabels all pod volumes with this random SELinux label.

Improvement using mount options

If a Pod and its volume meet all of the following conditions, Kubernetes will mount the volume directly with the right SELinux label. Such a mount happens in constant time and the container runtime will not need to recursively relabel any files on it.

- The operating system must support SELinux. Without SELinux support detected, the kubelet and the container runtime do nothing with regard to SELinux.
- The feature gates ReadWriteOncePod and SELinuxMountReadWriteOncePod must be enabled. These feature gates are Beta in Kubernetes 1.27 and were Alpha in 1.25. With either of these feature gates disabled, SELinux labels are always applied by the container runtime in a recursive walk through the volume (or its subPaths).
- The Pod must have at least seLinuxOptions.level assigned in its Pod Security Context, or all Pod containers must have it set in their Security Contexts. Kubernetes reads the default user, role and type from the operating system defaults (typically system_u, system_r and container_t). Without Kubernetes knowing at least the SELinux level, the container runtime assigns a random one after the volumes are mounted and still relabels the volumes recursively in that case.
- The volume must be a Persistent Volume with Access Mode ReadWriteOncePod. This is a limitation of the initial implementation. As described above, two Pods can have different SELinux labels and still use the same volume, as long as they use different subPaths of it. This use case is not possible when the volume is mounted with an SELinux label, because the whole volume is mounted and most filesystems don't support mounting a single volume multiple times with multiple SELinux labels. If running two Pods with two different SELinux contexts and different subPaths of the same volume is necessary in your deployments, please comment in the KEP issue (or upvote an existing comment - it's best not to duplicate). Such pods may not run when the feature is extended to cover all volume access modes.
- The volume plugin or the CSI driver responsible for the volume supports mounting with SELinux mount options. These in-tree volume plugins support mounting with SELinux mount options: fc, iscsi, and rbd. CSI drivers that support mounting with SELinux mount options must announce that in their CSIDriver instance by setting the seLinuxMount field. Volumes managed by other volume plugins or CSI drivers that don't set seLinuxMount: true will be recursively relabeled by the container runtime.

Mounting with SELinux context

When all of the aforementioned conditions are met, the kubelet passes the -o context=<SELinux label> mount option to the volume plugin or CSI driver. CSI driver vendors must ensure that this mount option is supported by their CSI driver and, if necessary, that the CSI driver appends other mount options needed for -o context to work. For example, NFS may need -o context=<SELinux label>,nosharecache, so each volume mounted from the same NFS server can have a different SELinux label value. Similarly, CIFS may need -o context=<SELinux label>,nosharesock. It's up to the CSI driver vendor to test their CSI driver in an SELinux-enabled environment before setting seLinuxMount: true in the CSIDriver instance.

How can I learn more?

SELinux in containers: see the excellent visual SELinux guide by Daniel J Walsh. Note that the guide is older than Kubernetes; it describes Multi-Category Security (MCS) mode using virtual machines as an example, but a similar concept is used for containers.

See this series of blog posts for details on how exactly SELinux is applied to containers by container runtimes:
- How SELinux separates containers using Multi-Level Security
- Why you should be using Multi-Category Security for your Linux containers

Read the KEP: Speed up SELinux volume relabeling using mounts
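To make the conditions above concrete, here is a minimal, illustrative sketch (not taken from the blog post) of the three objects involved: a CSIDriver that announces seLinuxMount support, a ReadWriteOncePod claim, and a Pod that sets seLinuxOptions.level. The driver name, claim name, SELinux level and image are placeholders.

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io          # illustrative CSI driver name
spec:
  seLinuxMount: true                   # driver declares support for -o context mounts
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOncePod                 # required by the initial implementation
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: selinux-mount-demo
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"            # at least the level must be set
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9 # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc

With both feature gates enabled and a driver that sets seLinuxMount: true, the kubelet should mount this volume with -o context instead of having the container runtime relabel every file.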
·kubernetes.io·
Nine more US states join federal lawsuit against Google over ad tech
Nine states, including Michigan and Nebraska, have joined a U.S. Department of Justice lawsuit against Alphabet's Google which alleges the search and advertising company broke antitrust law in running its digital advertising business, the department said on Monday.
·reuters.com·
Kubernetes Network Policies Explained
What are Kubernetes Network Policies and how to use them? In this video, I'll show you how to use Kubernetes Network Policies to restrict access between Pods. I'll also show you the pros and cons of k8s Network Policies.

#kubernetes #k8s #kubernetesnetworking

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

🔗 Additional Info 🔗
➡ Gist with the commands: https://gist.github.com/vfarcic/f67624c05df3d949c8d9a6976adb4631
🔗 Kubernetes Network Policies: https://kubernetes.io/docs/concepts/services-networking/network-policies

💰 Sponsorships 💰
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).

👋 Contact me 👋
➡ Twitter: https://twitter.com/vfarcic
➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

🚀 Other Channels 🚀
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

⏱ Timecodes ⏱
00:00 Introduction To Kubernetes Network Policies
04:07 What Are Kubernetes Network Policies?
06:12 Applications Without Network Policies
08:13 Kubernetes Network Policies In Action
15:20 Pros And Cons Of Kubernetes Network Policies
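As a companion to the video's topic (not taken from the video itself), here is a minimal sketch of a NetworkPolicy that restricts access between Pods; the namespace, labels and port are assumptions for illustration.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                    # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend           # only Pods labeled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080

Once any Ingress policy selects the backend Pods, traffic from all other Pods is denied by default, which is the "restrict access between Pods" behavior the video discusses.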
·youtube.com·
How to Test Your Proxy Support in Docker
This post explains and demonstrates how to ensure that when a proxy is configured, all outgoing requests are only ever made through the proxy (and test in Docker).
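The post's exact setup isn't reproduced here; as one hedged sketch of the idea, a Docker Compose file can attach the application only to an internal network (which Docker cuts off from the outside world) and expose a forward proxy as the single path out, so any request that bypasses the proxy simply fails. The service names, proxy image and port are assumptions.

services:
  app:
    image: your-app:latest            # placeholder application image
    environment:
      HTTP_PROXY: http://proxy:3128   # route outgoing HTTP through the proxy
      HTTPS_PROXY: http://proxy:3128  # route outgoing HTTPS through the proxy
      NO_PROXY: localhost,127.0.0.1
    networks:
      - internal                      # no direct route to the internet
  proxy:
    image: ubuntu/squid:latest        # a forward proxy listening on 3128
    networks:
      - internal                      # reachable by the app
      - egress                        # the only network with internet access
networks:
  internal:
    internal: true                    # Docker gives this network no external connectivity
  egress: {}

If the app still reaches external hosts while the proxy container is stopped, some outgoing request is not honoring the proxy configuration.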
·appsmith.com·
Blog: Kubernetes 1.27: More fine-grained pod topology spread policies reached beta
Authors: Alex Wang (Shopee), Kante Yin (DaoCloud), Kensei Nakada (Mercari)

In Kubernetes v1.19, Pod topology spread constraints went to general availability (GA). As time passed, we - SIG Scheduling - received feedback from users, and, as a result, we're actively working on improving the Topology Spread feature via three KEPs. All of these features have reached beta in Kubernetes v1.27 and are enabled by default. This blog post introduces each feature and the use case behind each of them.

KEP-3022: min domains in Pod Topology Spread

Pod Topology Spread has the maxSkew parameter to define the degree to which Pods may be unevenly distributed. But there wasn't a way to control the number of domains over which we should spread. Some users want to force spreading Pods over a minimum number of domains, and if there aren't enough already present, make the cluster-autoscaler provision them.

Kubernetes v1.24 introduced the minDomains parameter for pod topology spread constraints, as an alpha feature. Via the minDomains parameter, you can define the minimum number of domains. For example, assume there are 3 Nodes with enough capacity, and a newly created ReplicaSet has the following topologySpreadConstraints in its Pod template:

...
topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 5 # requires 5 Nodes at least (because each Node has a unique hostname).
    whenUnsatisfiable: DoNotSchedule # minDomains is valid only when DoNotSchedule is used.
    topologyKey: kubernetes.io/hostname
    labelSelector:
      matchLabels:
        foo: bar

In this case, 3 Pods will be scheduled to those 3 Nodes, but the other 2 Pods from this ReplicaSet will be unschedulable until more Nodes join the cluster. You can imagine that the cluster autoscaler provisions new Nodes based on these unschedulable Pods, and as a result, the replicas are finally spread over 5 Nodes.

KEP-3094: Take taints/tolerations into consideration when calculating podTopologySpread skew

Before this enhancement, when you deployed a pod with podTopologySpread configured, kube-scheduler would take the Nodes that satisfy the Pod's nodeAffinity and nodeSelector into consideration in filtering and scoring, but would not care about whether the node taints are tolerated by the incoming pod or not. This may lead to a node with an untolerated taint being the only candidate for spreading, and as a result, the pod will be stuck in Pending if it doesn't tolerate the taint.

To allow more fine-grained decisions about which Nodes to account for when calculating spreading skew, Kubernetes 1.25 introduced two new fields within topologySpreadConstraints to define node inclusion policies: nodeAffinityPolicy and nodeTaintsPolicy. A manifest that applies these policies looks like the following:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Configure a topology spread constraint
  topologySpreadConstraints:
    - maxSkew: <integer>
      # ...
      nodeAffinityPolicy: [Honor|Ignore]
      nodeTaintsPolicy: [Honor|Ignore]
  # other Pod fields go here

The nodeAffinityPolicy field indicates how Kubernetes treats a Pod's nodeAffinity or nodeSelector for pod topology spreading. If Honor, kube-scheduler filters out nodes not matching nodeAffinity/nodeSelector in the calculation of spreading skew. If Ignore, all nodes will be included, regardless of whether they match the Pod's nodeAffinity/nodeSelector or not. For backwards compatibility, nodeAffinityPolicy defaults to Honor.

The nodeTaintsPolicy field defines how Kubernetes considers node taints for pod topology spreading. If Honor, only tainted nodes for which the incoming pod has a toleration will be included in the calculation of spreading skew. If Ignore, kube-scheduler will not consider the node taints at all in the calculation of spreading skew, so a node with a taint the pod does not tolerate will also be included. For backwards compatibility, nodeTaintsPolicy defaults to Ignore.

The feature was introduced in v1.25 as alpha. By default, it was disabled, so if you wanted to use this feature in v1.25, you had to explicitly enable the feature gate NodeInclusionPolicyInPodTopologySpread. In the following v1.26 release, the associated feature graduated to beta and is enabled by default.

KEP-3243: Respect Pod topology spread after rolling upgrades

Pod Topology Spread uses the field labelSelector to identify the group of pods over which spreading will be calculated. When using topology spreading with Deployments, it is common practice to use the labelSelector of the Deployment as the labelSelector in the topology spread constraints. However, this implies that all pods of a Deployment are part of the spreading calculation, regardless of whether they belong to different revisions. As a result, when a new revision is rolled out, spreading will apply across pods from both the old and new ReplicaSets, and so by the time the new ReplicaSet is completely rolled out and the old one is rolled back, the actual spreading we are left with may not match expectations because the deleted pods from the older ReplicaSet will cause skewed distribution for the remaining pods. To avoid this problem, in the past users needed to add a revision label to the Deployment and update it manually at each rolling upgrade (both the label on the pod template and the labelSelector in the topologySpreadConstraints).

To solve this problem with a simpler API, Kubernetes v1.25 introduced a new field named matchLabelKeys to topologySpreadConstraints. matchLabelKeys is a list of pod label keys to select the pods over which spreading will be calculated. The keys are used to look up values from the labels of the Pod being scheduled; those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod.

With matchLabelKeys, you don't need to update the pod.spec between different revisions. The controller or operator managing rollouts just needs to set different values to the same label key for different revisions. The scheduler will assume the values automatically based on matchLabelKeys. For example, if you are configuring a Deployment, you can use the label keyed with pod-template-hash, which is added automatically by the Deployment controller, to distinguish between different revisions in a single Deployment.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: foo
    matchLabelKeys:
      - pod-template-hash

Getting involved

These features are managed by Kubernetes SIG Scheduling. Please join us and share your feedback. We look forward to hearing from you!

How can I learn more?

- Pod Topology Spread Constraints in the Kubernetes documentation
- KEP-3022: min domains in Pod Topology Spread
- KEP-3094: Take taints/tolerations into consideration when calculating PodTopologySpread skew
- KEP-3243: Respect PodTopologySpread after rolling upgrades
·kubernetes.io·
At 95 it's high time to retire: Federal Circuit's chief judge deserves respect and trust for her complaint over Judge Newman's alleged inability and misconduct
I know that the position I'm taking here is not going to be popular with a significant part of this blog's audience. On LinkedIn and other w...
·fosspatents.com·
m1k1o/neko
A self-hosted virtual browser that runs in Docker and uses WebRTC.
·github.com·
The leaker was in the same career field as me; by default, he was granted a top secret clearance. Policies need to change here. I have more unneeded classified info in my head than actionable info. | The military loved Discord for Gen Z recruiting. Then the leaks began.
·washingtonpost.com·
The End of Faking It in Silicon Valley
Recent charges, convictions and sentences all indicate that the start-up world’s habit of playing fast and loose with the truth actually has consequences.
·nytimes.com·