1_r/devopsish

Blog: Kubernetes 1.28: A New (alpha) Mechanism For Safer Cluster Upgrades
Author: Richa Banker (Google)

This blog describes the mixed version proxy, a new alpha feature in Kubernetes 1.28. The mixed version proxy enables an HTTP request for a resource to be served by the correct API server in cases where there are multiple API servers at varied versions in a cluster. For example, this is useful during a cluster upgrade, or when you're rolling out the runtime configuration of the cluster's control plane.

What problem does this solve?
When a cluster undergoes an upgrade, the kube-apiservers existing at different versions in that scenario can serve different sets (groups, versions, resources) of built-in resources. A resource request made in this scenario may be served by any of the available apiservers, potentially resulting in the request ending up at an apiserver that is not aware of the requested resource and consequently serving an incorrect 404 Not Found error. Incorrectly served 404 errors can have serious consequences, such as namespace deletion being blocked or objects being garbage collected by mistake.

How do we solve the problem?
The new "Mixed Version Proxy" feature gives the kube-apiserver the capability to proxy a request to a peer kube-apiserver that is aware of the requested resource and can therefore serve it. To do this, a new filter has been added to the handler chain in the API server's aggregation layer. The new filter checks whether the request is for a group/version/resource that the apiserver doesn't know about (using the existing StorageVersion API). If so, it proxies the request to one of the apiservers listed in the ServerStorageVersion object. If the identified peer apiserver fails to respond (for reasons such as network connectivity, or a race between the request being received and the controller registering the apiserver-resource info in the ServerStorageVersion object), then a 503 ("Service Unavailable") error is served.

To prevent indefinite proxying of the request, a (new for v1.28) HTTP header X-Kubernetes-APIServer-Rerouted: true is added to the original request once it is determined that the request cannot be served by the original API server. Setting it to true marks that the original API server couldn't handle the request and it should therefore be proxied. If a destination peer API server sees this header, it never proxies the request further.

To set the network location of a kube-apiserver that peers will use to proxy requests, the value passed in --advertise-address or (when --advertise-address is unspecified) the --bind-address flag is used. For users with network configurations that would not allow communication between peer kube-apiservers using the addresses specified in these flags, the correct peer address can be passed via the --peer-advertise-ip and --peer-advertise-port flags introduced with this feature.

How do I enable this feature?
The following steps are required to enable the feature (see the configuration sketch at the end of this excerpt):
1. Download the latest Kubernetes project (version v1.28.0 or later).
2. Switch on the feature gate with the command line flag --feature-gates=UnknownVersionInteroperabilityProxy=true on the kube-apiservers.
3. Pass the CA bundle that will be used by the source kube-apiserver to authenticate the destination kube-apiserver's serving certs using the flag --peer-ca-file on the kube-apiservers. Note: this flag is required for the feature to work and has no default value.
4. Pass the correct IP and port of the local kube-apiserver that peers will use to connect to this kube-apiserver while proxying a request, using the --peer-advertise-ip and --peer-advertise-port flags at startup. If unset, the value passed to either --advertise-address or --bind-address is used. If those, too, are unset, the host's default interface will be used.

What's missing?
Currently we only proxy resource requests to a peer kube-apiserver when it is determined necessary. Next we need to address how to handle discovery requests in such scenarios. Right now we are planning the following capabilities for beta:
- Merged discovery across all kube-apiservers
- Use of an egress dialer for network connections made to peer kube-apiservers

How can I learn more?
- Read the Mixed Version Proxy documentation
- Read KEP-4020: Unknown Version Interoperability Proxy

How can I get involved?
Reach us on Slack: #sig-api-machinery, or through the mailing list. Huge thanks to the contributors that have helped in the design, implementation, and review of this feature: Daniel Smith, Han Kang, Joe Betz, Jordan Liggitt, Antonio Ojea, David Eads and Ben Luddy!
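As a rough illustration of the enablement steps above, here is a minimal sketch of a kubeadm ClusterConfiguration fragment that passes these flags to the kube-apiserver. The flag names come from the post; the CA bundle path, IP address, and port are placeholder values, not recommendations from the source.

```yaml
# Sketch only: flag names are from the post, values are placeholders.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "UnknownVersionInteroperabilityProxy=true"
    peer-ca-file: "/etc/kubernetes/pki/ca.crt"   # required, no default value
    peer-advertise-ip: "10.0.0.11"               # only needed when peers cannot reach
    peer-advertise-port: "6443"                  # the advertise/bind address directly
```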
·kubernetes.io·
How TechWorld with Nana Spreads DevOps Skills to Millions
The creator of an online educational empire discusses why DevOps is needed now more than ever and how this is the perfect time to begin a career in DevOps.
·thenewstack.io·
Ampere Computing Publishes Guide For Steam Play Games On Their AArch64 Server CPUs
While Ampere Computing's Altra (Max) and forthcoming AmpereOne families of AArch64 server processors are designed for the data center, the company has published a guide to running Steam for Linux on these ARM64 processors -- including Steam Play (Proton) for enjoying Windows games on these Linux servers.
·phoronix.com·
Introducing the Enterprise Contract
You may have heard of sigstore and its container image verification tool, cosign. This blog post introduces a policy-driven workflow, Enterprise Contract, built on those technologies.
·enterprisecontract.dev·
This worked so well for Evernote… | The 100-Year Plan on WordPress.com
Crafting Legacies, One Century at a Time. The 100-Year Plan ensures that your stories, achievements, and memories are p…
·wordpress.com·
Conway's Law and Kubernetes
I’ve been spending a lot of time in the last couple of weeks doing non-technical work, so in this post I’m again going to go for a less-technical topic and explore some thoughts I’ve been having around the Kubernetes project as a whole, and how it’s organized. It should be noted that I do occasionally contribute to the Kubernetes project, as well as review some PRs from time to time, but I don’t have any inside knowledge into how CNCF (the organization managing Kubernetes) works or how/why we got here. For the purposes of this blog post I’m an interested outsider :)
·blog.appliedcomputing.io·
Giving up the iPad-only travel dream
Every time any of us packs a bag, we are making some very specific tech-focused decisions. It starts with what devices we need (or can live without) and cascades into charging bricks and cords and …
·sixcolors.com·
CVE-2020-19909 | Ubuntu
Ubuntu is an open source software operating system that runs from the desktop, to the cloud, to all your internet connected things.
·ubuntu.com·
Another Plea to the Reminders Team, This Time About the Today View
After getting my much-loved badge change in Reminders last year, I am back with another request, now as a full-time Reminders user. The Today view is the most important UI in Reminders, as it pulls overdue and due tasks into one place. Here is what mine looks like right now: As you can see, [...]
·512pixels.net·
Critical security vulnerability in Alertmanager
Hey everyone! @SimonHiker and I just released Alertmanager v0.26, which includes a critical security fix and a hefty number of enhancements - please try it and drop some feedback! ❤️ https://t.co/KJVgXONpeZ — Josue Abreu (@gotjosh) August 24, 2023
·x.com·
Enhancing GitOps Workflows with Botkube
Transform your GitOps process with Botkube: Bid farewell to obstacles and welcome smooth productivity!
·botkube.io·
Blog: Kubernetes v1.28: Introducing native sidecar containers
Authors: Todd Neal (AWS), Matthias Bertschy (ARMO), Sergey Kanzhelev (Google), Gunju Kim (NAVER), Shannon Kularathna (Google)

This post explains how to use the new sidecar feature, which enables restartable init containers and is available in alpha in Kubernetes 1.28. We want your feedback so that we can graduate this feature as soon as possible.

The concept of a "sidecar" has been part of Kubernetes since nearly the very beginning. In 2015, sidecars were described in a blog post about composite containers as additional containers that "extend and enhance the 'main' container". Sidecar containers have become a common Kubernetes deployment pattern and are often used for network proxies or as part of a logging system. Until now, sidecars were a concept that Kubernetes users applied without native support. The lack of native support has caused some usage friction, which this enhancement aims to resolve.

What are sidecar containers in 1.28?
Kubernetes 1.28 adds a new restartPolicy field to init containers that is available when the SidecarContainers feature gate is enabled.

```yaml
apiVersion: v1
kind: Pod
spec:
  initContainers:
  - name: secret-fetch
    image: secret-fetch:1.0
  - name: network-proxy
    image: network-proxy:1.0
    restartPolicy: Always
  containers:
  ...
```

The field is optional and, if set, the only valid value is Always. Setting this field changes the behavior of init containers as follows:
- The container restarts if it exits
- Any subsequent init container starts immediately after the startupProbe has successfully completed, instead of waiting for the restartable init container to exit
- The resource usage calculation changes for the pod, as restartable init container resources are now added to the sum of the resource requests by the main containers

Pod termination continues to only depend on the main containers. An init container with a restartPolicy of Always (named a sidecar) won't prevent the pod from terminating after the main containers exit.

The following properties of restartable init containers make them ideal for the sidecar deployment pattern:
- Init containers have a well-defined startup order regardless of whether you set a restartPolicy, so you can ensure that your sidecar starts before any container declarations that come after the sidecar declaration in your manifest.
- Sidecar containers don't extend the lifetime of the Pod, so you can use them in short-lived Pods with no changes to the Pod lifecycle.
- Sidecar containers are restarted on exit, which improves resilience and lets you use sidecars to provide services that your main containers can more reliably consume.

When to use sidecar containers
You might find built-in sidecar containers useful for workloads such as the following:
- Batch or AI/ML workloads, or other Pods that run to completion. These workloads will experience the most significant benefits.
- Network proxies that start up before any other container in the manifest. Every other container that runs can use the proxy container's services. For instructions, see the Kubernetes Native sidecars in Istio blog post.
- Log collection containers, which can now start before any other container and run until the Pod terminates. This improves the reliability of log collection in your Pods.
- Jobs, which can use sidecars for any purpose without Job completion being blocked by the running sidecar. No additional configuration is required to ensure this behavior. (A sketch of this pattern appears at the end of this excerpt.)

How did users get sidecar behavior before 1.28?
Prior to the sidecar feature, the following options were available for implementing sidecar behavior, depending on the desired lifetime of the sidecar container:
- Lifetime of sidecar less than Pod lifetime: Use an init container, which provides well-defined startup order. However, the sidecar has to exit for other init containers and main Pod containers to start.
- Lifetime of sidecar equal to Pod lifetime: Use a main container that runs alongside your workload containers in the Pod. This method doesn't give you control over startup order, and lets the sidecar container potentially block Pod termination after the workload containers exit.

The built-in sidecar feature solves for the use case of having a lifetime equal to the Pod lifetime and has the following additional benefits:
- Provides control over startup order
- Doesn't block Pod termination

Transitioning existing sidecars to the new model
We recommend only using the SidecarContainers feature gate in short-lived testing clusters at the alpha stage. If you have an existing sidecar that is configured as a main container so it can run for the lifetime of the pod, it can be moved to the initContainers section of the pod spec and given a restartPolicy of Always. In many cases, the sidecar should work as before with the added benefit of having a defined startup ordering and not prolonging the pod lifetime.

Known issues
The alpha release of built-in sidecar containers has the following known issues, which we'll resolve before graduating the feature to beta:
- The CPU, memory, device, and topology managers are unaware of the sidecar container lifetime and additional resource usage, and will operate as if the Pod had lower resource requests than it actually does.
- The output of kubectl describe node is incorrect when sidecars are in use. The output shows resource usage that's lower than the actual usage because it doesn't use the new resource usage calculation for sidecar containers.

We need your feedback!
In the alpha stage, we want you to try out sidecar containers in your environments and open issues if you encounter bugs or friction points. We're especially interested in feedback about the following:
- The shutdown sequence, especially with multiple sidecars running
- The backoff timeout adjustment for crashing sidecars
- The behavior of Pod readiness and liveness probes when sidecars are running

To open an issue, see the Kubernetes GitHub repository.

What's next?
In addition to the known issues that will be resolved, we're working on adding termination ordering for sidecar and main containers. This will ensure that sidecar containers only terminate after the Pod's main containers have exited. We're excited to see the sidecar feature come to Kubernetes and are interested in feedback.

Acknowledgements
Many years have passed since the original KEP was written, so we apologize if we omit anyone who worked on this feature over the years. This is a best-effort attempt to recognize the people involved in this effort.
- mrunalp for design discussions and reviews
- thockin for API discussions and support through the years
- bobbypage for reviews
- smarterclayton for detailed review and feedback
- howardjohn for feedback over the years and for trying it early during implementation
- derekwaynecarr and dchen1107 for leadership
- jpbetz for API and termination ordering designs as well as code reviews
- Joseph-Irving and rata for the early iterations of the design and reviews years back
- swatisehgal and ffromani for early feedback on the resource managers impact
- alculquicondor for feedback on addressing the version skew of the scheduler
- wojtek-t for PRR review of the KEP
- ahg-g for reviewing the scheduler portion of the KEP
- adisky for the Job completion issue

More Information
- Read API for sidecar containers in the Kubernetes documentation
- Read the Sidecar KEP
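To make the Jobs use case mentioned above concrete, here is a minimal sketch (not taken from the post) of a Job whose log-collecting sidecar is declared as a restartable init container. The image names and command are placeholders, and the SidecarContainers feature gate must be enabled for the restartPolicy field to be accepted.

```yaml
# Sketch only: a Job whose sidecar is a restartable init container,
# so the running sidecar does not block Job completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never                    # applies to the main containers
      initContainers:
      - name: log-collector                   # the sidecar: starts first, restarts on exit
        image: example.com/log-collector:1.0  # placeholder image
        restartPolicy: Always
      containers:
      - name: worker                          # the Pod (and Job) completes when this exits
        image: example.com/batch-worker:1.0   # placeholder image
        command: ["/bin/process-batch"]       # placeholder command
```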
·kubernetes.io·
Blog: Kubernetes 1.28: Beta support for using swap on Linux
Author: Itamar Holder (Red Hat)

The 1.22 release introduced Alpha support for configuring swap memory usage for Kubernetes workloads running on Linux, on a per-node basis. Now, in release 1.28, support for swap on Linux nodes has graduated to Beta, along with many new improvements.

Prior to version 1.22, Kubernetes did not provide support for swap memory on Linux systems. This was due to the inherent difficulty in guaranteeing and accounting for pod memory utilization when swap memory was involved. As a result, swap support was deemed out of scope in the initial design of Kubernetes, and the default behavior of a kubelet was to fail to start if swap memory was detected on a node.

In version 1.22, the swap feature for Linux was initially introduced in its Alpha stage. This represented a significant advancement, providing Linux users with the opportunity to experiment with the swap feature for the first time. However, as an Alpha version, it was not fully developed and had several issues, including inadequate support for cgroup v2, insufficient metrics and summary API statistics, inadequate testing, and more.

Swap in Kubernetes has numerous use cases for a wide range of users. As a result, the node special interest group within the Kubernetes project has invested significant effort into supporting swap on Linux nodes for beta. Compared to the alpha, the kubelet's support for running with swap enabled is more stable and robust, more user-friendly, and addresses many known shortcomings. This graduation to beta represents a crucial step towards achieving the goal of fully supporting swap in Kubernetes.

How do I use it?
Swap memory can be used on a node where it has already been provisioned by enabling the NodeSwap feature gate on the kubelet. Additionally, the failSwapOn configuration setting (or the deprecated --fail-swap-on command line flag) must be set to false.

You can configure the memorySwap.swapBehavior option to define the manner in which a node utilizes swap memory. For instance:

```yaml
# this fragment goes into the kubelet's configuration file
memorySwap:
  swapBehavior: UnlimitedSwap
```

The available configuration options for swapBehavior are:
- UnlimitedSwap (default): Kubernetes workloads can use as much swap memory as they request, up to the system limit.
- LimitedSwap: The utilization of swap memory by Kubernetes workloads is subject to limitations. Only Pods of Burstable QoS are permitted to employ swap.

If configuration for memorySwap is not specified and the feature gate is enabled, by default the kubelet will apply the same behaviour as the UnlimitedSwap setting.

Note that NodeSwap is supported for cgroup v2 only. For Kubernetes v1.28, using swap along with cgroup v1 is no longer supported.

Install a swap-enabled cluster with kubeadm

Before you begin
It is required for this demo that the kubeadm tool be installed, following the steps outlined in the kubeadm installation guide. If swap is already enabled on the node, cluster creation may proceed. If swap is not enabled, please refer to the provided instructions for enabling swap.

Create a swap file and turn swap on
I'll demonstrate creating 4GiB of unencrypted swap:

```bash
dd if=/dev/zero of=/swapfile bs=128M count=32
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile   # enable the swap file only until this node is rebooted
swapon -s
```

To start the swap file at boot time, add a line like /swapfile swap swap defaults 0 0 to the /etc/fstab file.
Set up a Kubernetes cluster that uses swap-enabled nodes
To make things clearer, here is an example kubeadm configuration file kubeadm-config.yaml for the swap-enabled cluster:

```yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap
```

Then create a single-node cluster using kubeadm init --config kubeadm-config.yaml. During init, there is a warning that swap is enabled on the node and that the kubelet failSwapOn is set to false. We plan to remove this warning in a future release.

How is the swap limit being determined with LimitedSwap?
The configuration of swap memory, including its limitations, presents a significant challenge. Not only is it prone to misconfiguration, but as a system-level property, any misconfiguration could potentially compromise the entire node rather than just a specific workload. To mitigate this risk and ensure the health of the node, we have implemented Swap in Beta with automatic configuration of limitations.

With LimitedSwap, Pods that do not fall under the Burstable QoS classification (i.e. BestEffort/Guaranteed QoS Pods) are prohibited from utilizing swap memory. BestEffort QoS Pods exhibit unpredictable memory consumption patterns and lack information regarding their memory usage, making it difficult to determine a safe allocation of swap memory. Conversely, Guaranteed QoS Pods are typically employed for applications that rely on the precise allocation of resources specified by the workload, with memory being immediately available. To maintain the aforementioned security and node health guarantees, these Pods are not permitted to use swap memory when LimitedSwap is in effect.

Prior to detailing the calculation of the swap limit, it is necessary to define the following terms:
- nodeTotalMemory: The total amount of physical memory available on the node.
- totalPodsSwapAvailable: The total amount of swap memory on the node that is available for use by Pods (some swap memory may be reserved for system use).
- containerMemoryRequest: The container's memory request.

The swap limit is configured as: (containerMemoryRequest / nodeTotalMemory) × totalPodsSwapAvailable

In other words, the amount of swap that a container is able to use is proportionate to its memory request, the node's total physical memory, and the total amount of swap memory on the node that is available for use by Pods.

It is important to note that, for containers within Burstable QoS Pods, it is possible to opt out of swap usage by specifying memory requests that are equal to memory limits. Containers configured in this manner will not have access to swap memory.

How does it work?
There are a number of possible ways that one could envision swap use on a node. When swap is already provisioned and available on a node, SIG Node has proposed that the kubelet should be able to be configured so that:
- It can start with swap on.
- It will direct the Container Runtime Interface to allocate zero swap memory to Kubernetes workloads by default.

Swap configuration on a node is exposed to a cluster admin via memorySwap in the KubeletConfiguration. As a cluster administrator, you can specify the node's behaviour in the presence of swap memory by setting memorySwap.swapBehavior.
The kubelet employs the CRI (container runtime interface) API to direct the container runtime to configure specific cgroup v2 parameters (such as memory.swap.max) in a manner that will enable the desired swap configuration for a container. The runtime is then responsible for writing these settings to the container-level cgroup.

How can I monitor swap?
A notable deficiency in the Alpha version was the inability to monitor and introspect swap usage. This issue has been addressed in the Beta version introduced in Kubernetes 1.28, which now provides the capability to monitor swap usage through several different methods.

The beta version of kubelet now collects node-level metric statistics, which can be accessed at the /metrics/resource and /stats/summary kubelet HTTP endpoints. This allows clients who can directly interrogate the kubelet to monitor swap usage and remaining swap memory when using LimitedSwap. Additionally, a machine_swap_bytes metric has been added to cadvisor to show the total physical swap capacity of the machine.

Caveats
Having swap available on a system reduces predictability. Swap's performance is worse than regular memory, sometimes by many orders of magnitude, which can cause unexpected performance regressions. Furthermore, swap changes a system's behaviour under memory pressure. Since enabling swap permits greater memory usage for workloads in Kubernetes that cannot be predictably accounted for, it also increases the risk of noisy neighbours and unexpected packing configurations, as the scheduler cannot account for swap memory usage.

The performance of a node with swap memory enabled depends on the underlying physical storage. When swap memory is in use, performance will be significantly worse in an I/O operations per second (IOPS) constrained environment, such as a cloud VM with I/O throttling, than with faster storage mediums like solid-state drives or NVMe. As such, we do not advocate the utilization of swap memory for workloads or environments that are subject to performance constraints. Furthermore, it is recommended to employ LimitedSwap, as this significantly mitigates the risks posed to the node.

Cluster administrators and developers should benchmark their nodes and applications before using swap in production scenarios, and we need your help with that!

Security risk
Enabling swap on a system without encryption poses a security risk, as critical information, such as volumes that represent Kubernetes Secrets, may be swapped out to the disk. If an unauthorized individual gains access to the disk, they could potentially obtain this confidential data. To mitigate this risk, the Kubernetes project strongly recommends that you encrypt your swap space. However, handling encrypted swap is not within the scope of kubelet; rather, it is a general OS configuration concern and should be addressed at that level. It is the administrator's responsibility to provision encrypted swap to mitigate this risk.
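As a hypothetical worked example of the LimitedSwap formula above (the numbers are illustrative, not from the post): on a node with nodeTotalMemory = 64 GiB and totalPodsSwapAvailable = 8 GiB, a Burstable container with containerMemoryRequest = 16 GiB would be limited to (16 GiB / 64 GiB) × 8 GiB = 2 GiB of swap, while a container in the same Pod that sets its memory request equal to its memory limit would receive no swap at all.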
·kubernetes.io·
IBM sells weather business to private equity firm Francisco Partners
International Business Machines has agreed to sell its weather business to private equity firm Francisco Partners for an undisclosed sum, the technology services giant said on Tuesday.
·reuters.com·