1_r/devopsish

Fedora Program Manager Laid Off As Part Of Red Hat Cuts
As part of the Red Hat layoffs announced in April, with around a 4% reduction in headcount for the IBM-owned company, one of the surprising casualties from that round of cost-cutting is the Fedora Program Manager.
·phoronix.com·
A look inside the making of an NFL football schedule | Amazon Web Services
Predicting what fans are going to watch in December, now. What is the NFL up to with AWS, and how does it work? In just three months, National Football League (NFL) schedule makers methodically build an exciting 18-week, 272-game schedule spanning 576 possible game windows. How do they do it? We caught up with […]
·aws.amazon.com·
U.S. Universities Are Building a New Semiconductor Workforce
“How do we create a curriculum that allows universities that might not have the infrastructure—say, lab space or trained faculty—to give students semiconductor experience?”—Ayanna Howard, Ohio State University
·spectrum.ieee.org·
White House considers ban on ransom payments, with caveats
Experts suggest the effort, a reversal from the administration's previous stance, is fraught with complications that could cause unintended consequences.
·cybersecuritydive.com·
Navigating the Tech World
A Comprehensive Guide to Finding, Landing, and Thriving in Your Tech Job Are you passionate about technology and seeking a fulfilling career in the tech industry? Whether you’re a recent grad…
·therain.dev·
A Case for SPIFFE & SPIRE - Software Identity Management
How do you manage software identity for your Kubernetes workloads, Kubernetes nodes, virtual machines (VMs) and other software entities? In this video, Lukonde Mwila explains the concepts behind SPIFFE and SPIRE, an open source universal security standard for managing software identity. SPIFFE & SPIRE: https://spiffe.io/ #AWS #Kubernetes #EKS
·youtube.com·
Justice Department Announces Court-Authorized Disruption of the Snake Malware Network Controlled by Russia's Federal Security Service
“Russia used sophisticated malware to steal sensitive information from our allies, laundering it through a network of infected computers in the United States in a cynical attempt to conceal their crimes. Meeting the challenge of cyberespionage requires creativity and a willingness to use all lawful means to protect our nation and our allies,” stated United States Attorney Peace. “The court-authorized remote search and remediation announced today demonstrates my Office and our partners’ commitment to using all of the tools at our disposal to protect the American people.”
·justice.gov·
Unconscious Bias Training That Works
To become more diverse, equitable, and inclusive, many companies have turned to unconscious bias (UB) training. By raising awareness of the mental shortcuts that lead to snap judgments—often based on race and gender—about people’s talents or character, it strives to make hiring and promotion fairer and improve interactions with customers and among colleagues. But most UB training is ineffective, research shows. The problem is, increasing awareness is not enough—and can even backfire—because sending the message that bias is involuntary and widespread may make it seem unavoidable. UB training that gets results, in contrast, teaches attendees to manage their biases, practice new behaviors, and track their progress. It gives them information that contradicts stereotypes and allows them to connect with colleagues whose experiences are different from theirs. And it’s not a onetime session; it entails a longer journey and structural organizational changes. In this article the authors describe how rigorous UB programs at Microsoft, Starbucks, and other organizations help employees overcome denial and act on their awareness, develop the empathy that combats bias, diversify their networks, and commit to improvement.
·hbr.org·
4 Core Principles of GitOps
It's at the point where GitOps is getting enough notice that a brief on its principles is appropriate. Here are four ground rules to keep in mind.
·thenewstack.io·
Blog: Kubernetes 1.27: In-place Resource Resize for Kubernetes Pods (alpha)
Author: Vinay Kulkarni (Kubescaler Labs)

If you have deployed Kubernetes pods with CPU and/or memory resources specified, you may have noticed that changing the resource values involves restarting the pod. This has been a disruptive operation for running workloads... until now.

In Kubernetes v1.27, we have added a new alpha feature that allows users to resize CPU/memory resources allocated to pods without restarting the containers. To facilitate this, the resources field in a pod's containers now allows mutation for cpu and memory resources. They can be changed simply by patching the running pod spec.

This also means that the resources field in the pod spec can no longer be relied upon as an indicator of the pod's actual resources. Monitoring tools and other such applications must now look at new fields in the pod's status. Kubernetes queries the actual CPU and memory requests and limits enforced on the running containers via a CRI (Container Runtime Interface) API call to the runtime, such as containerd, which is responsible for running the containers. The response from the container runtime is reflected in the pod's status.

In addition, a new restartPolicy for resize has been added. It gives users control over how their containers are handled when resources are resized.

What's new in v1.27?

Besides the addition of resize policy in the pod's spec, a new field named allocatedResources has been added to containerStatuses in the pod's status. This field reflects the node resources allocated to the pod's containers. In addition, a new field called resources has been added to the container's status. This field reflects the actual resource requests and limits configured on the running containers as reported by the container runtime. Lastly, a new field named resize has been added to the pod's status to show the status of the last requested resize:

- Proposed is an acknowledgement of the requested resize and indicates that the request was validated and recorded.
- InProgress indicates that the node has accepted the resize request and is in the process of applying it to the pod's containers.
- Deferred means that the requested resize cannot be granted at this time; the node will keep retrying. The resize may be granted when other pods leave and free up node resources.
- Infeasible is a signal that the node cannot accommodate the requested resize. This can happen if the requested resize exceeds the maximum resources the node can ever allocate for a pod.

When to use this feature

Here are a few examples where this feature may be useful:

- A pod is running on a node with either too many or too few resources.
- Pods are not being scheduled due to a lack of sufficient CPU or memory in a cluster that is underutilized because its running pods were overprovisioned.
- Evicting stateful pods that need more resources so they can be scheduled on bigger nodes is expensive or disruptive, when lower-priority pods on the same node could instead be resized down or moved.

How to use this feature

In order to use this feature in v1.27, the InPlacePodVerticalScaling feature gate must be enabled.
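Concretely, the per-resource resize policy sits alongside the resources field in each container's spec. Here is a minimal sketch of a pod using it, assuming the field shape of the v1.27 alpha described above (the pod and container names are hypothetical):

  apiVersion: v1
  kind: Pod
  metadata:
    name: resize-demo   # hypothetical name
  spec:
    containers:
    - name: app
      image: nginx
      resizePolicy:
      - resourceName: cpu
        restartPolicy: NotRequired       # apply CPU changes in place, no container restart
      - resourceName: memory
        restartPolicy: RestartContainer  # restart this container to apply memory changes
      resources:
        requests:
          cpu: "500m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "128Mi"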
A local cluster with this feature enabled can be started as shown below:

  root@vbuild:~/go/src/k8s.io/kubernetes# FEATURE_GATES=InPlacePodVerticalScaling=true ./hack/local-up-cluster.sh
  go version go1.20.2 linux/arm64
  +++ [0320 13:52:02] Building go targets for linux/arm64
      k8s.io/kubernetes/cmd/kubectl (static)
      k8s.io/kubernetes/cmd/kube-apiserver (static)
      k8s.io/kubernetes/cmd/kube-controller-manager (static)
      k8s.io/kubernetes/cmd/cloud-controller-manager (non-static)
      k8s.io/kubernetes/cmd/kubelet (non-static)
  ...
  ...
  Logs:
    /tmp/etcd.log
    /tmp/kube-apiserver.log
    /tmp/kube-controller-manager.log
    /tmp/kube-proxy.log
    /tmp/kube-scheduler.log
    /tmp/kubelet.log

  To start using your cluster, you can open up another terminal/tab and run:

    export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
    cluster/kubectl.sh

  Alternatively, you can write to the default kubeconfig:

    export KUBERNETES_PROVIDER=local
    cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
    cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
    cluster/kubectl.sh config set-context local --cluster=local --user=myself
    cluster/kubectl.sh config use-context local
    cluster/kubectl.sh

Once the local cluster is up and running, Kubernetes users can schedule pods with resources, and resize the pods via kubectl. An example of how to use this feature is illustrated in the following demo video.
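For instance, resizing the hypothetical resize-demo pod sketched earlier is just a patch of the running pod's resources, followed by a check of the new resize field in its status (a minimal sketch; the in-place behavior requires the feature gate above and the containerd support noted under Known Issues below):

  kubectl patch pod resize-demo --patch \
    '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'

  kubectl get pod resize-demo -o jsonpath='{.status.resize}'
  # Proposed, InProgress, Deferred, or Infeasible, per the states described above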
Example Use Cases

Cloud-based Development Environment

In this scenario, developers or development teams write their code locally but build and test their code in Kubernetes pods with consistent configs that reflect production use. Such pods need minimal resources when the developers are writing code, but need significantly more CPU and memory when they build their code or run a battery of tests. This use case can leverage the in-place pod resize feature (with a little help from eBPF) to quickly resize the pod's resources and prevent the kernel OOM (out of memory) killer from terminating their processes. This KubeCon North America 2022 conference talk illustrates the use case.

Java processes initialization CPU requirements

Some Java applications may need significantly more CPU during initialization than what is needed during normal process operation. If such applications specify CPU requests and limits suited for normal operation, they may suffer from very long startup times. Such pods can request higher CPU values at the time of pod creation, and can be resized down to normal running needs once the application has finished initializing.

Known Issues

This feature enters v1.27 at alpha stage. Below are a few known issues users may encounter:

- containerd versions below v1.6.9 do not have the CRI support needed for full end-to-end operation of this feature. Attempts to resize pods will appear to be stuck in the InProgress state, and the resources field in the pod's status is never updated even though the new resources may have been enacted on the running containers.
- Pod resize may encounter a race condition with other pod updates, causing delayed enactment of pod resize.
- Reflecting the resized container resources in the pod's status may take a while.
- Static CPU management policy is not supported with this feature.

Credits

This feature is a result of the efforts of a very collaborative Kubernetes community. Here's a little shoutout to just a few of the many, many people who contributed countless hours of their time and helped make this happen.

- @thockin for detail-oriented API design and air-tight code reviews.
- @derekwaynecarr for simplifying the design and thorough API and node reviews.
- @dchen1107 for bringing vast knowledge from Borg and helping us avoid pitfalls.
- @ruiwen-zhao for adding containerd support that enabled full E2E implementation.
- @wangchen615 for implementing comprehensive E2E tests and driving scheduler fixes.
- @bobbypage for invaluable help getting CI ready and quickly investigating issues, covering for me on my vacation.
- @Random-Liu for thorough kubelet reviews and identifying problematic race conditions.
- @Huang-Wei, @ahg-g, and @alculquicondor for helping get scheduler changes done.
- @mikebrow and @marosset for reviews on short notice that helped CRI changes make it into v1.25.
- @endocrimes and @ehashman for helping ensure that the oft-overlooked tests are in good shape.
- @mrunalp for reviewing cgroupv2 changes and ensuring clean handling of v1 vs v2.
- @liggitt and @gjkim42 for tracking down and root-causing important issues that were missed post-merge.
- @SergeyKanzhelev for supporting and shepherding various issues during the home stretch.
- @pdgetrf for making the first prototype a reality.
- @dashpole for bringing me up to speed on 'the Kubernetes way' of doing things.
- @bsalamat and @kgolab for very thoughtful insights and suggestions in the early stages.
- @sftim and @tengqm for ensuring docs are easy to follow.
- @dims for being omnipresent and helping make merges happen at critical hours.
- Release teams for ensuring that the project stayed healthy.

And a big thanks to my very supportive management, Dr. Xiaoning Ding and Dr. Ying Xiong, for their patience and encouragement.

References

For app developers
- Resize CPU and Memory Resources assigned to Containers
- Assign Memory Resources to Containers and Pods
- Assign CPU Resources to Containers and Pods

For cluster administrators
- Configure Default Memory Requests and Limits for a Namespace
- Configure Default CPU Requests and Limits for a Namespace
·kubernetes.io·
When Your Employee Tells You They’re Burned Out
Burnout is affecting both leaders and employees — and contributing to a talent shortage that's challenging and costly to navigate. It can be difficult for even the most enlightened managers to have conversations about employee burnout while managing the needs of the business. The author offers five steps to take when an employee comes to you expressing burnout: 1) Treat their concerns seriously; 2) Understand their experience of burnout; 3) Identify its root causes; 4) Consider short- and long-term solutions; and 5) Create a monitoring plan.
·hbr.org·
Blog: Kubernetes 1.27: Avoid Collisions Assigning Ports to NodePort Services
Author: Xu Zhenglun (Alibaba)

In Kubernetes, a Service can be used to provide a unified traffic endpoint for applications running on a set of Pods. Clients can use the virtual IP address (or VIP) provided by the Service for access, and Kubernetes provides load balancing for traffic accessing different back-end Pods. However, a ClusterIP type of Service is limited to providing access to nodes within the cluster, while traffic from outside the cluster cannot be routed. One way to solve this problem is to use a type: NodePort Service, which sets up a mapping to a specific port on all nodes in the cluster, thus redirecting traffic from the outside to the inside of the cluster.

How does Kubernetes allocate node ports to Services?

When a type: NodePort Service is created, its corresponding port(s) are allocated in one of two ways:

- Dynamic: If the Service type is NodePort and you do not set a nodePort value explicitly in the spec for that Service, the Kubernetes control plane will automatically allocate an unused port to it at creation time.
- Static: In addition to the dynamic auto-assignment described above, you can also explicitly assign a port that is within the nodeport port range configuration.

The value of nodePort that you manually assign must be unique across the whole cluster. Attempting to create a Service of type: NodePort where you explicitly specify a node port that was already allocated results in an error.

Why do you need to reserve ports of a NodePort Service?

Sometimes, you may want to have a NodePort Service running on well-known ports so that other components and users inside or outside the cluster can use them. In some complex cluster deployments with a mix of Kubernetes nodes and other servers on the same network, it may be necessary to use some pre-defined ports for communication. In particular, some fundamental components cannot rely on the VIPs that back type: LoadBalancer Services, because the virtual IP address mapping implementation for that cluster also relies on these foundational components.

Now suppose you need to expose a Minio object storage service on Kubernetes to clients running outside the Kubernetes cluster, and the agreed port is 30009. We need to create a Service as follows:

  apiVersion: v1
  kind: Service
  metadata:
    name: minio
  spec:
    ports:
    - name: api
      nodePort: 30009
      port: 9000
      protocol: TCP
      targetPort: 9000
    selector:
      app: minio
    type: NodePort

However, as mentioned before, if the port (30009) required for the minio Service is not reserved, and another type: NodePort (or possibly type: LoadBalancer) Service is created and dynamically allocated before or concurrently with the minio Service, TCP port 30009 might be allocated to that other Service; if so, creation of the minio Service will fail due to a node port collision.

How can you avoid NodePort Service port conflicts?

Kubernetes 1.24 introduced changes for type: ClusterIP Services, dividing the CIDR range for cluster IP addresses into two blocks that use different allocation policies to reduce the risk of conflicts. In Kubernetes 1.27, as an alpha feature, you can adopt a similar policy for type: NodePort Services by enabling a new feature gate, ServiceNodePortStaticSubrange. Turning this on allows you to use a different port allocation strategy for type: NodePort Services and reduce the risk of collision. The port range for NodePort will be divided based on the formula min(max(16, nodeport-size / 32), 128).
The outcome of the formula will be a number between 16 and 128, with a step size that increases as the size of the nodeport range increases. The outcome of the formula determines the size of the static port range. When the port range is smaller than 16, the size of the static port range will be set to 0, which means that all ports will be dynamically allocated.

Dynamic port assignment will use the upper band by default; once this has been exhausted, it will use the lower range. This allows users to use static allocations on the lower band with a low risk of collision.

Examples

Default range: 30000-32767
  service-node-port-range: 30000-32767
  Band offset:  min(max(16, 2768/32), 128) = min(max(16, 86), 128) = min(86, 128) = 86
  Static band:  30000-30085 (86 ports)
  Dynamic band: 30086-32767 (2682 ports)

Very small range: 30000-30015
  service-node-port-range: 30000-30015
  Band offset:  0
  Static band:  none (0 ports)
  Dynamic band: 30000-30015 (16 ports)

Small (lower boundary) range: 30000-30127
  service-node-port-range: 30000-30127
  Band offset:  min(max(16, 128/32), 128) = min(max(16, 4), 128) = min(16, 128) = 16
  Static band:  30000-30015 (16 ports)
  Dynamic band: 30016-30127 (112 ports)

Large (upper boundary) range: 30000-34095
  service-node-port-range: 30000-34095
  Band offset:  min(max(16, 4096/32), 128) = min(max(16, 128), 128) = min(128, 128) = 128
  Static band:  30000-30127 (128 ports)
  Dynamic band: 30128-34095 (3968 ports)

Very large range: 30000-38191
  service-node-port-range: 30000-38191
  Band offset:  min(max(16, 8192/32), 128) = min(max(16, 256), 128) = min(256, 128) = 128
  Static band:  30000-30127 (128 ports)
  Dynamic band: 30128-38191 (8064 ports)
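As a sketch of how a cluster administrator would switch this on (both flags are standard kube-apiserver options; the range value shown is just the default, for illustration):

  kube-apiserver \
    --service-node-port-range=30000-32767 \
    --feature-gates=ServiceNodePortStaticSubrange=true
    # (other kube-apiserver flags omitted)

With this configuration, an explicitly pinned port like the minio example's 30009 falls in the static band (30000-30085), while dynamically assigned Services draw from 30086 upward first, greatly reducing the chance of the collision described above.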
·kubernetes.io·
Building the Micro Mirror Free Software CDN
As should surprise no one, based on my past projects of running my own autonomous system , building my own Internet Exchange Point , and bui...
·blog.thelifeofkenneth.com·