Suggested Reads

Git – The Good, The Bad and The Ugly
Have you ever found yourself in a situation where you accidentally pushed secret keys or huge files while using Git? Did you know that removing those keys even 20 seconds later might already be too late?
·codingwoman.com·
Blog: Kubernetes 1.26: Eviction policy for unhealthy pods guarded by PodDisruptionBudgets
Authors: Filip Křepinský (Red Hat), Morten Torkildsen (Google), Ravi Gudimetla (Apple)

Ensuring that disruptions to your application do not affect its availability isn't a simple task. Last month's release of Kubernetes v1.26 lets you specify an unhealthy pod eviction policy for PodDisruptionBudgets (PDBs) to help you maintain that availability during node management operations. In this article, we will dive deeper into what modifications were introduced for PDBs to give application owners greater flexibility in managing disruptions.

What problems does this solve?

API-initiated eviction of pods respects PodDisruptionBudgets (PDBs). This means that a requested voluntary disruption via an eviction should not disrupt a guarded application, and .status.currentHealthy of a PDB should not fall below .status.desiredHealthy. Running pods that are Unhealthy do not count towards the PDB status, but evicting them is possible only if the application is not disrupted. This helps applications that are disrupted or not yet started achieve availability as soon as possible, without additional downtime caused by evictions.

Unfortunately, this poses a problem for cluster administrators who would like to drain nodes without manual intervention. Misbehaving applications with pods in CrashLoopBackOff state (due to a bug or misconfiguration), or pods that simply fail to become ready, make this task much harder. When all pods of an application are unhealthy, any eviction request will fail because it would violate the PDB, and draining of the node cannot make any progress.

On the other hand, there are users who depend on the existing behavior in order to:

- prevent data loss that would be caused by deleting pods that guard an underlying resource or storage
- achieve the best availability possible for their application

Kubernetes 1.26 introduced a new experimental field to the PodDisruptionBudget API: .spec.unhealthyPodEvictionPolicy.
When enabled, this field lets you support both of those requirements.

How does it work?

API-initiated eviction is the process that triggers graceful pod termination. The process can be initiated either by calling the API directly, by using a kubectl drain command, or by other actors in the cluster. During this process, every pod removal is checked against the appropriate PDBs to ensure that a sufficient number of pods is always running in the cluster.

The following policies give PDB authors greater control over how the process deals with unhealthy pods. There are two policies to choose from: IfHealthyBudget and AlwaysAllow.

The former, IfHealthyBudget, follows the existing behavior to achieve the best availability that you get by default. Unhealthy pods can be disrupted only if their application has at least the minimum available (.status.desiredHealthy) number of pods.

By setting the spec.unhealthyPodEvictionPolicy field of your PDB to AlwaysAllow, you are choosing best-effort availability for your application. With this policy it is always possible to evict unhealthy pods, which makes it easier to maintain and upgrade your clusters. We think that AlwaysAllow will often be the better choice, but for some critical workloads you may still prefer to protect even unhealthy Pods from node drains or other forms of API-initiated eviction.

How do I use it?

This is an alpha feature, which means you have to enable the PDBUnhealthyPodEvictionPolicy feature gate by passing the command line argument --feature-gates=PDBUnhealthyPodEvictionPolicy=true to the kube-apiserver.

Here's an example. Assume that you've enabled the feature gate in your cluster, and that you've already defined a Deployment that runs a plain webserver. You labelled the Pods for that Deployment with app: nginx. You want to limit avoidable disruption, and you know that best-effort availability is sufficient for this app. You decide to allow evictions even if those webserver pods are unhealthy.
You create a PDB to guard this application, with the AlwaysAllow policy for evicting unhealthy pods:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  selector:
    matchLabels:
      app: nginx
  maxUnavailable: 1
  unhealthyPodEvictionPolicy: AlwaysAllow

How can I learn more?

- Read the KEP: Unhealthy Pod Eviction Policy for PDBs
- Read the documentation: Unhealthy Pod Eviction Policy for PodDisruptionBudgets
- Review the Kubernetes documentation for PodDisruptionBudgets, draining of Nodes, and evictions

How do I get involved?

If you have any feedback, please reach out to us in the #sig-apps channel on Slack (visit https://slack.k8s.io/ for an invitation if you need one), or on the SIG Apps mailing list: kubernetes-sig-apps@googlegroups.com
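For contrast, a critical workload that should keep the default protection for unhealthy pods can state the policy explicitly. This is a minimal sketch, not from the original post; the name, selector label, and minAvailable value are hypothetical:

```yaml
# Hypothetical PDB for a critical workload. IfHealthyBudget is the
# default behavior: unhealthy pods stay protected from eviction
# unless the application already has its minimum healthy pods.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: critical-db-pdb        # hypothetical name
spec:
  selector:
    matchLabels:
      app: critical-db         # hypothetical label
  minAvailable: 2
  unhealthyPodEvictionPolicy: IfHealthyBudget
```

Setting the field explicitly, rather than leaving it unset, documents the intent in the manifest even though the behavior is the same as the default.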
·kubernetes.io·
What Is a Pig Butchering Scam?
This type of devastating scheme ensnares victims and takes them for all they’re worth—and the threat is only growing.
·wired.com·
jessitron (@jessitron@hachyderm.io)
These apprentice tailors learn backwards: first the last thing you do (sewing buttons, hemming), and then the thing before that, until finally learning to cut. Understand the ramifications of what you're doing before you do it. Like if you learned to support software before you learned to deploy it, and only then did you try cutting code https://podcast.oddly-influenced.dev/episodes/legitimate-peripheral-participation
·hachyderm.io·
Updated whitepaper available: AWS Security Incident Response Guide | Amazon Web Services
The AWS Security Incident Response Guide focuses on the fundamentals of responding to security incidents within a customer’s Amazon Web Services (AWS) Cloud environment. You can use the guide to help build and iterate on your AWS security incident response program. Recently, we updated the AWS Security Incident Response Guide to more clearly explain what […]
·aws.amazon.com·
Blog: Kubernetes 1.26: Retroactive Default StorageClass
Author: Roman Bednář (Red Hat)

The v1.25 release of Kubernetes introduced an alpha feature to change how a default StorageClass is assigned to a PersistentVolumeClaim (PVC). With the feature enabled, you no longer need to create a default StorageClass first and the PVC second in order to assign the class. Additionally, any PVCs without a StorageClass assigned can be updated later. This feature graduated to beta in Kubernetes 1.26. You can read retroactive default StorageClass assignment in the Kubernetes documentation for more details about how to use it, or you can read on to learn about why the Kubernetes project is making this change.

Why did StorageClass assignment need improvements?

Users might already be familiar with a similar feature that assigns default StorageClasses to new PVCs at the time of creation. This is currently handled by the admission controller. But what if there wasn't a default StorageClass defined at the time of PVC creation? Users would end up with a PVC that would never be assigned a class. As a result, no storage would be provisioned, and the PVC would be somewhat "stuck" at this point. Generally, two main scenarios could result in "stuck" PVCs and cause problems later down the road. Let's take a closer look at each of them.

Changing the default StorageClass

With the alpha feature enabled, admins had two options when they wanted to change the default StorageClass:

- Creating a new StorageClass as default before removing the old one. This would result in having two defaults for a short period. At this point, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying the default StorageClass), the newest default StorageClass would be chosen and assigned to this PVC.
- Removing the old default first and creating a new default StorageClass afterwards. This would result in having no default for a short time. Subsequently, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying the default StorageClass), the PVC would remain in Pending state forever. The user would have to fix this by deleting the PVC and recreating it once the default StorageClass was available.

Resource ordering during cluster installation

If a cluster installation tool needed to create resources that required storage, for example an image registry, it was difficult to get the ordering right. This is because any Pods that required storage would rely on the presence of a default StorageClass, and would fail to be created if it wasn't defined.

What changed

We've changed the PersistentVolume (PV) controller to assign a default StorageClass to any unbound PersistentVolumeClaim that has storageClassName set to null. We've also modified the PersistentVolumeClaim admission within the API server to allow changing the value from unset to an actual StorageClass name.

Null storageClassName versus storageClassName: "" - does it matter?

Before this feature was introduced, those values were equal in terms of behavior. Any PersistentVolumeClaim with storageClassName set to null or "" would bind to an existing PersistentVolume resource with storageClassName also set to null or "". With this new feature enabled, we wanted to maintain this behavior but also be able to update the StorageClass name. With these constraints in mind, the feature changes the semantics of null. If a default StorageClass is present, null translates to "Give me a default", while "" means "Give me a PersistentVolume that also has "" as its StorageClass name." In the absence of a default StorageClass, the behavior remains unchanged.

Summarizing the above, we've changed the semantics of null so that its behavior depends on the presence or absence of a default StorageClass. The table below shows all these cases, to better describe when a PVC binds and when its StorageClass gets updated.
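The distinction can be sketched with two claims. This is a minimal illustration, not from the original post; the claim names are hypothetical, and it assumes a default StorageClass exists in the cluster:

```yaml
# PVC A: storageClassName omitted (null). With a default StorageClass
# present, the PV controller retroactively assigns the default class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-default          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# PVC B: storageClassName set to "". This explicitly requests a PV
# whose storageClassName is also ""; it never picks up the default.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-no-class         # hypothetical name
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The empty string is therefore the way to opt out of default-class assignment entirely, while omitting the field means "use the default, whenever one appears".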
PVC binding behavior with retroactive default StorageClass:

                                 PVC storageClassName = ""   PVC storageClassName = null
Without default class
  PV storageClassName = ""       binds                       binds
  PV without storageClassName    binds                       binds
With default class
  PV storageClassName = ""       binds                       class updates
  PV without storageClassName    binds                       class updates

How to use it

If you want to test the feature while it's in beta, you need to enable the relevant feature gate in the kube-controller-manager and the kube-apiserver. Use the --feature-gates command line argument:

--feature-gates="...,RetroactiveDefaultStorageClass=true"

Test drive

If you would like to see the feature in action and verify that it works in your cluster, here's what you can try:

Define a basic PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Create the PersistentVolumeClaim when there is no default StorageClass. The PVC won't provision or bind (unless there is an existing, suitable PV already present) and will remain in Pending state.

$ kc get pvc
NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-1   Pending

Configure one StorageClass as default.

$ kc patch sc -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/my-storageclass patched

Verify that the PersistentVolumeClaim is now provisioned correctly and was updated retroactively with the new default StorageClass.

$ kc get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc-1   Bound    pvc-06a964ca-f997-4780-8627-b5c3bf5a87d8   1Gi        RWO            my-storageclass   87m

New metrics

To help you confirm that the feature is working as expected, we also introduced a new retroactive_storageclass_total metric to show how many times the PV controller attempted to update PersistentVolumeClaims, and retroactive_storageclass_errors_total to show how many of those attempts failed.
Getting involved

We always welcome new contributors, so if you would like to get involved you can join our Kubernetes Storage Special Interest Group (SIG). If you would like to share feedback, you can do so on our public Slack channel.

Special thanks to all the contributors who provided great reviews, shared valuable insight, and helped implement this feature (alphabetical order):

- Deep Debroy (ddebroy)
- Divya Mohan (divya-mohan0209)
- Jan Šafránek (jsafrane)
- Joe Betz (jpbetz)
- Jordan Liggitt (liggitt)
- Michelle Au (msau42)
- Seokho Son (seokho-son)
- Shannon Kularathna (shannonxtreme)
- Tim Bannister (sftim)
- Tim Hockin (thockin)
- Wojciech Tyczynski (wojtek-t)
- Xing Yang (xing-yang)
·kubernetes.io·
Folks have said that there’s a need for updated container basics for myriad reasons. I’ll try and highlight more content like this. | Back to basics: accessing Kubernetes pods
Kubernetes is a colossal beast. You need to understand many different concepts before it starts being useful. When everything is set up, you’ll probably want to expose some pods to the outside of the cluster. Kubernetes provides different ways to do it: I’ll describe them in this post.
·blog.frankel.ch·
Microsoft aims for AI-powered version of Bing - The Information
Microsoft Corp is working to launch a version of its search engine Bing that uses the artificial intelligence behind the OpenAI chatbot ChatGPT, The Information reported on Tuesday, citing two people with direct knowledge of the plans.
·reuters.com·
Large Language Models as Corporate Lobbyists
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine...
·arxiv.org·