Open source and Linux skills are still in demand in a dark economy
Despite the doom-and-gloom headlines about tech jobs, the Linux Foundation's latest survey says companies are still hiring savvy Linux and open source staffers.
A Case for SPIFFE & SPIRE - Software Identity Management
How do you manage software identity for your Kubernetes workloads, Kubernetes nodes, virtual machines (VMs) and other software entities? In this video, Lukonde Mwila explains the concepts behind SPIFFE, an open source universal standard for managing software identity, and SPIRE, its production implementation.
SPIFFE & SPIRE: https://spiffe.io/
#AWS #Kubernetes #EKS
TurboTax is sending checks to 4.4 million customers as part of a $141 million settlement | CNN Business
Roughly 4.4 million people will soon receive checks from TurboTax, following a 50-state settlement with parent company Intuit for allegedly steering millions of low-income Americans away from free tax-filing services.
Justice Department Announces Court-Authorized Disruption of the Snake Malware Network Controlled by Russia's Federal Security Service
“Russia used sophisticated malware to steal sensitive information from our allies, laundering it through a network of infected computers in the United States in a cynical attempt to conceal their crimes. Meeting the challenge of cyberespionage requires creativity and a willingness to use all lawful means to protect our nation and our allies,” stated United States Attorney Peace. “The court-authorized remote search and remediation announced today demonstrates my Office and our partners’ commitment to using all of the tools at our disposal to protect the American people.”
To become more diverse, equitable, and inclusive, many companies have turned to unconscious bias (UB) training. By raising awareness of the mental shortcuts that lead to snap judgments—often based on race and gender—about people’s talents or character, it strives to make hiring and promotion fairer and improve interactions with customers and among colleagues. But most UB training is ineffective, research shows. The problem is, increasing awareness is not enough—and can even backfire—because sending the message that bias is involuntary and widespread may make it seem unavoidable. UB training that gets results, in contrast, teaches attendees to manage their biases, practice new behaviors, and track their progress. It gives them information that contradicts stereotypes and allows them to connect with colleagues whose experiences are different from theirs. And it’s not a onetime session; it entails a longer journey and structural organizational changes. In this article the authors describe how rigorous UB programs at Microsoft, Starbucks, and other organizations help employees overcome denial and act on their awareness, develop the empathy that combats bias, diversify their networks, and commit to improvement.
Bcachefs Submitted For Review - Next-Gen CoW File-System Aims For Mainline
Bcachefs, a copy-on-write (CoW) file-system that originally grew out of the Linux kernel's block cache code, has been in development for more than half a decade.
Blog: Kubernetes 1.27: In-place Resource Resize for Kubernetes Pods (alpha)
Author: Vinay Kulkarni (Kubescaler Labs)
If you have deployed Kubernetes pods with CPU and/or memory resources
specified, you may have noticed that changing the resource values involves
restarting the pod. This has been a disruptive operation for running
workloads... until now.
In Kubernetes v1.27, we have added a new alpha feature that allows users
to resize CPU/memory resources allocated to pods without restarting the
containers. To facilitate this, the resources field in a pod's containers
is now mutable for cpu and memory resources. They can be changed
simply by patching the running pod spec.
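For example, a CPU resize on a running pod could look like the following sketch (the pod and container names here are hypothetical):

# Hypothetical pod/container names; requests and limits are patched in place
kubectl patch pod resize-demo --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'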
This also means that the resources field in the pod spec can no longer be
relied upon as an indicator of the pod's actual resources. Monitoring tools
and other such applications must now look at new fields in the pod's status.
Kubernetes queries the actual CPU and memory requests and limits enforced on
the running containers via a CRI (Container Runtime Interface) API call to the
runtime, such as containerd, which is responsible for running the containers.
The response from the container runtime is reflected in the pod's status.
In addition, a new restartPolicy for resize has been added. It gives users
control over how their containers are handled when resources are resized.
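As a sketch, the per-resource resize restart policy sits alongside the resources field in the container spec (the container name and image below are hypothetical):

spec:
  containers:
  - name: app                          # hypothetical container name
    image: registry.example/app:latest
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # apply CPU resizes in place, no restart
    - resourceName: memory
      restartPolicy: RestartContainer  # restart the container to apply memory changes
    resources:
      requests:
        cpu: 500m
        memory: 128Mi

In this sketch, CPU resizes would be applied without restarting the container, while memory resizes would restart it.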
What's new in v1.27?
Besides the addition of resize policy in the pod's spec, a new field named
allocatedResources has been added to containerStatuses in the pod's status.
This field reflects the node resources allocated to the pod's containers.
In addition, a new field called resources has been added to the container's
status. This field reflects the actual resource requests and limits configured
on the running containers as reported by the container runtime.
Lastly, a new field named resize has been added to the pod's status to show the
status of the last requested resize. A value of Proposed is an acknowledgement
of the requested resize and indicates that the request was validated and recorded. A
value of InProgress indicates that the node has accepted the resize request
and is in the process of applying the resize request to the pod's containers.
A value of Deferred means that the requested resize cannot be granted at this
time, and the node will keep retrying. The resize may be granted when other pods
leave and free up node resources. A value of Infeasible is a signal that the
node cannot accommodate the requested resize. This can happen if the requested
resize exceeds the maximum resources the node can ever allocate for a pod.
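Since these values are surfaced in ordinary status fields, a resize can be observed directly; a sketch (the pod name is hypothetical):

# Overall status of the last requested resize (Proposed/InProgress/Deferred/Infeasible)
kubectl get pod resize-demo -o jsonpath='{.status.resize}'
# Node resources actually allocated to the first container
kubectl get pod resize-demo -o jsonpath='{.status.containerStatuses[0].allocatedResources}'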
When to use this feature
Here are a few examples where this feature may be useful:
A pod is running on a node with either too many or too few resources.
Pods are not being scheduled due to lack of sufficient CPU or memory in a
cluster that is underutilized because its running pods were overprovisioned.
Evicting certain stateful pods that need more resources, in order to schedule
them on bigger nodes, is an expensive or disruptive operation when other,
lower-priority pods on the node could be resized down or moved instead.
How to use this feature
In order to use this feature in v1.27, the InPlacePodVerticalScaling
feature gate must be enabled. A local cluster with this feature enabled
can be started as shown below:
root@vbuild:~/go/src/k8s.io/kubernetes# FEATURE_GATES=InPlacePodVerticalScaling=true ./hack/local-up-cluster.sh
go version go1.20.2 linux/arm64
+++ [0320 13:52:02] Building go targets for linux/arm64
k8s.io/kubernetes/cmd/kubectl (static)
k8s.io/kubernetes/cmd/kube-apiserver (static)
k8s.io/kubernetes/cmd/kube-controller-manager (static)
k8s.io/kubernetes/cmd/cloud-controller-manager (non-static)
k8s.io/kubernetes/cmd/kubelet (non-static)
...
...
Logs:
/tmp/etcd.log
/tmp/kube-apiserver.log
/tmp/kube-controller-manager.log
/tmp/kube-proxy.log
/tmp/kube-scheduler.log
/tmp/kubelet.log
To start using your cluster, you can open up another terminal/tab and run:
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cluster/kubectl.sh
Alternatively, you can write to the default kubeconfig:
export KUBERNETES_PROVIDER=local
cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
cluster/kubectl.sh config set-context local --cluster=local --user=myself
cluster/kubectl.sh config use-context local
cluster/kubectl.sh
Once the local cluster is up and running, Kubernetes users can schedule pods
with resources, and resize the pods via kubectl. An example of how to use this
feature is illustrated in the following demo video.
Example Use Cases
Cloud-based Development Environment
In this scenario, developers or development teams write their code locally
but build and test their code in Kubernetes pods with consistent configs
that reflect production use. Such pods need minimal resources when the
developers are writing code, but need significantly more CPU and memory
when they build their code or run a battery of tests. This use case can
leverage the in-place pod resize feature (with a little help from eBPF) to
quickly resize the pod's resources and prevent the kernel OOM (out of memory)
killer from terminating their processes.
This KubeCon North America 2022 conference talk
illustrates the use case.
Java process initialization CPU requirements
Some Java applications may need significantly more CPU during initialization
than what is needed during normal process operation time. If such applications
specify CPU requests and limits suited for normal operation, they may suffer
from very long startup times. Such pods can request higher CPU values at the
time of pod creation, and can be resized down to normal running needs once the
application has finished initializing.
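A sketch of that flow with kubectl (the pod and container names and the resource values are hypothetical):

# The pod was created with generous CPU for startup (e.g. cpu: "4")
# Once the application finishes initializing, resize down to steady-state needs:
kubectl patch pod java-app --patch \
  '{"spec":{"containers":[{"name":"java-app","resources":{"requests":{"cpu":"1"},"limits":{"cpu":"1"}}}]}}'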
Known Issues
This feature enters v1.27 at alpha stage.
Below are a few known issues users may encounter:
containerd versions below v1.6.9 do not have the CRI support needed for full
end-to-end operation of this feature. Attempts to resize pods will appear
to be stuck in the InProgress state, and the resources field in the pod's
status is never updated even though the new resources may have been enacted
on the running containers.
Pod resize may encounter a race condition with other pod updates, causing
delayed enactment of pod resize.
Reflecting the resized container resources in the pod's status may take a while.
Static CPU management policy is not supported with this feature.
Credits
This feature is a result of the efforts of a very collaborative Kubernetes community.
Here's a little shoutout to just a few of the many, many people who contributed
countless hours of their time and helped make this happen.
@thockin for detail-oriented API design and air-tight code reviews.
@derekwaynecarr for simplifying the design and thorough API and node reviews.
@dchen1107 for bringing vast knowledge from Borg and helping us avoid pitfalls.
@ruiwen-zhao for adding containerd support that enabled full E2E implementation.
@wangchen615 for implementing comprehensive E2E tests and driving scheduler fixes.
@bobbypage for invaluable help getting CI ready and quickly investigating issues, covering for me on my vacation.
@Random-Liu for thorough kubelet reviews and identifying problematic race conditions.
@Huang-Wei, @ahg-g, @alculquicondor for helping get scheduler changes done.
@mikebrow, @marosset for reviews on short notice that helped CRI changes make it into v1.25.
@endocrimes, @ehashman for helping ensure that the oft-overlooked tests are in good shape.
@mrunalp for reviewing cgroupv2 changes and ensuring clean handling of v1 vs v2.
@liggitt, @gjkim42 for tracking down, root-causing important missed issues post-merge.
@SergeyKanzhelev for supporting and shepherding various issues during the home stretch.
@pdgetrf for making the first prototype a reality.
@dashpole for bringing me up to speed on 'the Kubernetes way' of doing things.
@bsalamat, @kgolab for very thoughtful insights and suggestions in the early stages.
@sftim, @tengqm for ensuring docs are easy to follow.
@dims for being omnipresent and helping make merges happen at critical hours.
Release teams for ensuring that the project stayed healthy.
And a big thanks to my very supportive management Dr. Xiaoning Ding
and Dr. Ying Xiong for their patience and encouragement.
References
For app developers
Resize CPU and Memory Resources assigned to Containers
Assign Memory Resources to Containers and Pods
Assign CPU Resources to Containers and Pods
For cluster administrators
Configure Default Memory Requests and Limits for a Namespace
Configure Default CPU Requests and Limits for a Namespace
Being a Mom Helps Me Protect Our Communication Infrastructure | NIST
Jeanne Quimby's kids are the reason she came up with her team’s idea for how to detect cybersecurity events on our U.S. critical communication infrastructure.
Burnout is affecting both leaders and employees — and contributing to a talent shortage that’s challenging and costly to navigate. It can be challenging for even the most enlightened managers to have conversations about employee burnout while managing the needs of the business. The author offers five steps to take when an employee comes to you expressing burnout: 1) Treat their concerns seriously; 2) Understand their experience of burnout; 3) Identify its root causes; 4) Consider short- and long-term solutions; and 5) Create a monitoring plan.
Blog: Kubernetes 1.27: Avoid Collisions Assigning Ports to NodePort Services
Author: Xu Zhenglun (Alibaba)
In Kubernetes, a Service can be used to provide a unified traffic endpoint for
applications running on a set of Pods. Clients can use the virtual IP address (or VIP) provided
by the Service for access, and Kubernetes provides load balancing for traffic accessing
different back-end Pods. However, a ClusterIP type of Service is limited to providing access
from within the cluster, while traffic from outside the cluster cannot be routed.
One way to solve this problem is to use a type: NodePort Service, which sets up a mapping
to a specific port of all nodes in the cluster, thus redirecting traffic from the
outside to the inside of the cluster.
How does Kubernetes allocate node ports to Services?
When a type: NodePort Service is created, its corresponding port(s) are allocated in one
of two ways:
Dynamic: If the Service type is NodePort and you do not set a nodePort
value explicitly in the spec for that Service, the Kubernetes control plane will
automatically allocate an unused port to it at creation time.
Static: In addition to the dynamic auto-assignment described above, you can also
explicitly assign a port that is within the nodeport port range configuration.
The value of nodePort that you manually assign must be unique across the whole cluster.
Attempting to create a Service of type: NodePort where you explicitly specify a node port that
was already allocated results in an error.
Why do you need to reserve ports for NodePort Services?
Sometimes, you may want to have a NodePort Service running on well-known ports
so that other components and users inside or outside the cluster can use them.
In some complex cluster deployments with a mix of Kubernetes nodes and other servers on the same network,
it may be necessary to use some pre-defined ports for communication. In particular, some fundamental
components cannot rely on the VIPs that back type: LoadBalancer Services
because the virtual IP address mapping implementation for that cluster also relies on
these foundational components.
Now suppose you need to expose a Minio object storage service on Kubernetes to clients
running outside the Kubernetes cluster, and the agreed port is 30009. You would need to
create a Service as follows:
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  ports:
  - name: api
    nodePort: 30009
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: minio
  type: NodePort
However, as mentioned before, if the port (30009) required for the minio Service is not reserved,
and another type: NodePort (or possibly type: LoadBalancer) Service is created and dynamically
allocated before or concurrently with the minio Service, TCP port 30009 might be allocated to that
other Service; if so, creation of the minio Service will fail due to a node port collision.
How can you avoid NodePort Service port conflicts?
Kubernetes 1.24 introduced changes for type: ClusterIP Services, dividing the CIDR range for cluster
IP addresses into two blocks that use different allocation policies to reduce the risk of conflicts.
In Kubernetes 1.27, as an alpha feature, you can adopt a similar policy for type: NodePort Services.
You can enable a new feature gate,
ServiceNodePortStaticSubrange. Turning this on allows you to use a different port allocation strategy
for type: NodePort Services, reducing the risk of collision.
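On a cluster you manage yourself, enabling the gate is a matter of passing the standard feature-gates flag to kube-apiserver; a sketch (where you set API server flags depends on how your control plane is deployed):

kube-apiserver --feature-gates=ServiceNodePortStaticSubrange=true \
  --service-node-port-range=30000-32767 ...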
The port range for NodePort will be divided based on the formula
min(max(16, nodeport-size / 32), 128). The outcome of the formula will be a
number between 16 and 128, growing as the size of the nodeport range increases.
The outcome of the formula determines the size of the static port range. When
the port range is less than 16 ports, the size of the static port range will be
set to 0, which means that all ports will be dynamically allocated.
Dynamic port assignment uses the upper band by default; once this has been
exhausted, it will use the lower range. This allows users to make static
allocations in the lower band with a low risk of collision.
Examples
default range: 30000-32767

service-node-port-range:  30000-32767
Band offset:              min(max(16, 2768/32), 128) = min(max(16, 86), 128) = min(86, 128) = 86
Static band:              30000-30085 (86 ports)
Dynamic band:             30086-32767 (2682 ports)
very small range: 30000-30015

service-node-port-range:  30000-30015
Band offset:              0
Static band:              none
Dynamic band:             30000-30015 (16 ports)
small (lower boundary) range: 30000-30127

service-node-port-range:  30000-30127
Band offset:              min(max(16, 128/32), 128) = min(max(16, 4), 128) = min(16, 128) = 16
Static band:              30000-30015 (16 ports)
Dynamic band:             30016-30127 (112 ports)
large (upper boundary) range: 30000-34095

service-node-port-range:  30000-34095
Band offset:              min(max(16, 4096/32), 128) = min(max(16, 128), 128) = min(128, 128) = 128
Static band:              30000-30127 (128 ports)
Dynamic band:             30128-34095 (3968 ports)
very large range: 30000-38191

service-node-port-range:  30000-38191
Band offset:              min(max(16, 8192/32), 128) = min(max(16, 256), 128) = min(256, 128) = 128
Static band:              30000-30127 (128 ports)
Dynamic band:             30128-38191 (8064 ports)
Blog: Kubernetes 1.27: Safer, More Performant Pruning in kubectl apply
Authors: Katrina Verey (independent) and Justin Santa Barbara (Google)
Declarative configuration management with the kubectl apply command is the gold standard approach
to creating or modifying Kubernetes resources. However, one challenge it presents is the deletion
of resources that are no longer needed. In Kubernetes version 1.5, the --prune flag was
introduced to address this issue, allowing kubectl apply to automatically clean up previously
applied resources removed from the current configuration.
Unfortunately, that existing implementation of --prune has design flaws that diminish its
performance and can result in unexpected behaviors. The main issue stems from the lack of explicit
encoding of the previously applied set by the preceding apply operation, necessitating
error-prone dynamic discovery. Object leakage, inadvertent over-selection of resources, and limited
compatibility with custom resources are a few notable drawbacks of this implementation. Moreover,
its coupling to client-side apply hinders user upgrades to the superior server-side apply
mechanism.
Version 1.27 of kubectl introduces an alpha version of a revamped pruning implementation that
addresses these issues. This new implementation, based on a concept called ApplySet, promises
better performance and safety.
An ApplySet is a group of resources associated with a parent object on the cluster, as
identified and configured through standardized labels and annotations. Additional standardized
metadata allows for accurate identification of ApplySet member objects within the cluster,
simplifying operations like pruning.
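As a rough sketch of what that standardized metadata looks like, each member object carries a label tying it back to its parent's applyset identity (the ID value is tool-generated; the one below is a hypothetical placeholder):

metadata:
  labels:
    applyset.kubernetes.io/part-of: applyset-<unique-id>   # hypothetical placeholder ID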
To leverage ApplySet-based pruning, set the KUBECTL_APPLYSET=true environment variable and include
the flags --prune and --applyset in your kubectl apply invocation:
KUBECTL_APPLYSET=true kubectl apply -f directory/ --prune --applyset=name
By default, ApplySet uses a Secret as the parent object. However, you can also use
a ConfigMap with the format --applyset=configmaps/name. If your desired Secret or
ConfigMap object does not yet exist, kubectl will create it for you. Furthermore, custom
resources can be enabled for use as ApplySet parent objects.
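For instance, pruning against a ConfigMap parent in a specific namespace might look like this sketch (the set and namespace names are hypothetical):

KUBECTL_APPLYSET=true kubectl apply -f directory/ --prune --applyset=configmaps/my-set -n my-namespace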
The ApplySet implementation is based on a new low-level specification that can support higher-level
ecosystem tools by improving their interoperability. The lightweight nature of this specification
enables these tools to continue to use existing object grouping systems while opting in to
ApplySet's metadata conventions to prevent inadvertent changes by other tools (such as kubectl).
ApplySet-based pruning offers a promising solution to the shortcomings of the previous --prune
implementation in kubectl and can help streamline your Kubernetes resource management. Please
give this new feature a try and share your experiences with the community—ApplySet is under active
development, and your feedback is invaluable!
Additional resources
For more information on how to use ApplySet-based pruning, read
Declarative Management of Kubernetes Objects Using Configuration Files in the Kubernetes documentation.
For a deeper dive into the technical design of this feature or to learn how to implement the
ApplySet specification in your own tools, refer to KEP 3659:
ApplySet: kubectl apply --prune redesign and graduation strategy.
How do I get involved?
If you want to get involved in ApplySet development, you can get in touch with the developers at
SIG CLI. To provide feedback on the feature, please
file a bug
or request an enhancement
on the kubernetes/kubectl repository.
Siemens Open Source Manifesto: Our Commitment to the Open Source Ecosystem | Siemens Blog | Siemens
At Siemens, we strongly believe that Open Source software is key to our success. That's why we are thrilled to announce the launch of the Siemens Open…
Networking is one of the core pillars of Kubernetes, and the Special Interest
Group for Networking (SIG Network) is responsible for developing and maintaining
the networking features of Kubernetes. It covers all aspects to ensure
Kubernetes provides a reliable and scalable network infrastructure for
containerized applications.
In this SIG Network spotlight, Sujay Dey talked
with Shane Utt, Software Engineer at Kong, chair
of SIG Network, and maintainer of Gateway API, about different aspects of the SIG,
the exciting things going on, and how anyone can get involved and
contribute.
Sujay: Hello, and first of all, thanks for the opportunity to learn more
about SIG Network. I would love to hear your story, so could you please tell us
a bit about yourself, your role, and how you got involved in Kubernetes,
especially in SIG Network?
Shane: Hello! Thank you for reaching out.
My Kubernetes journey started while I was working for a small data centre: we
were early adopters of Kubernetes and focused on using Kubernetes to provide
SaaS products. That experience led to my next position developing a distribution
of Kubernetes with a focus on networking. During this period in my career, I was
active in SIG Network (predominantly as a consumer).
When I joined Kong my role in the community changed significantly, as
Kong actively encourages upstream participation. I greatly increased my
engagement and contributions to the Gateway API project during those
years, and eventually became a maintainer.
I care deeply about this community and the future of our technology, so when a
chair position for the SIG became available, I volunteered my time immediately.
I’ve enjoyed working on Kubernetes over the better part of a decade and I want
to continue to do my part to ensure our community and technology continue to
flourish.
Sujay: I have to say, that was a truly inspiring journey! Now, let us talk
a bit more about SIG Network. Since we know it covers a lot of ground, could you
please highlight its scope and current focus areas?
Shane: For those who may be uninitiated: SIG Network is responsible for the
components, interfaces, and APIs which expose networking capabilities to
Kubernetes users and workloads. The charter is a pretty good
indication of our scope, but I can add some additional highlights on some of our
current areas of focus (this is a non-exhaustive list of sub-projects):
kube-proxy & KPNG
Those familiar with Kubernetes will know the Service API, which enables
exposing a group of pods over a network. The current standard implementation
of Service is known as kube-proxy , but what may be unfamiliar to people is
that there are a growing number of disparate alternative implementations on the
rise in recent years. To try and give provisions to these implementations (and
also provide some areas of alignment so that implementations do not become too
disparate from each other) upstream Kubernetes efforts are underway to create a
more modular public interface for kube-proxy . The intention is for
implementations to join in around a common set of libraries and speak a common
language. This area of focus is known as the KPNG project, and if this sounds
interesting to you, please join us in the KPNG community meetings and
#sig-network-kpng on Kubernetes Slack .
Multi-Network
Today one of the primary requirements for Kubernetes networking is to achieve
connectivity between pods in a cluster, satisfying a large number of
Kubernetes end-users. However, some use cases require isolated networks and
special interfaces for performance-oriented needs (e.g. AF_XDP, memif,
SR-IOV). There's a growing need for special networking configurations in
Kubernetes in general. The Multi-Network project exists to improve the
management of multiple different networks for pods: anyone interested in some
of the lower-level details of pod networking (or anyone having relevant use
cases) can join us in the Multi-Network community meetings and
#sig-network-multi-network on Kubernetes Slack.
Network Policy
The NetworkPolicy API sub-group was formed to address network security beyond
the well-known version 1 of the NetworkPolicy resource. We’ve also been
working on the AdminNetworkPolicy resource (previously known as
ClusterNetworkPolicy) to provide cluster administrator-focused functionality.
The network policy sub-project is a great place to join in if you’re
particularly interested in security and CNI; please feel free to join our
community meetings and the #sig-network-policy-api channel on Kubernetes
Slack.
Gateway API
If you’re specially interested in ingress or mesh networking the Gateway
API may be a sub-project you would enjoy. In Gateway API , we’re actively
developing the successor to the illustrious Ingress API, which includes a
Gateway resource that defines the addresses and listeners of the gateway and
various routing types (e.g. HTTPRoute, GRPCRoute, TLSRoute, TCPRoute,
UDPRoute, etc.) that attach to Gateways. We also have an initiative within
this project called GAMMA, geared towards using Gateway API resources in a mesh
network context. There are some up-and-coming side projects within Gateway API
as well, including ingress2gateway, a tool for converting existing
Ingress objects to equivalent Gateway API resources, and Blixt, a Layer 4
implementation of Gateway API using Rust/eBPF for the data plane, intended as a
testing and reference implementation. If this sounds interesting, we would love
to have readers join us in our Gateway API community meetings and
#sig-network-gateway-api on Kubernetes Slack.
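For readers who haven't seen these resources, here is a minimal sketch of a Gateway with an HTTPRoute attached, using the v1beta1 API current at the time (all names and the gateway class are hypothetical):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class   # hypothetical class provided by an implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway           # attach this route to the Gateway above
  rules:
  - backendRefs:
    - name: example-service         # hypothetical backend Service
      port: 8080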
Sujay : Couldn’t agree more! That was a very informative description, thanks
for highlighting them so nicely. As you have already mentioned about the SIG
channels to get involved, would you like to add anything about where people like
beginners can jump in and contribute?
Shane: For help getting started, Kubernetes Slack is a great place
to talk to community members and includes several #sig-network-project
channels as well as our main #sig-network channel. Also, check for issues
labelled good-first-issue if you prefer to just dive right into the
repositories. Let us know how we can help you!
Sujay: What skills are contributors to SIG Network likely to learn?
Shane: To me, it feels limitless. Practically speaking, it's very much up to
the individual what they want to learn. However, if you just intend to learn
as much as you possibly can about networking, SIG Network is a great place to
join in and grow your knowledge.
If you’ve ever wondered how Kubernetes Service API works or wanted to
implement an ingress controller, this is a great place to join in. If you wanted
to dig down deep into the inner workings of CNI, or how the network interfaces
at the pod level are configured, you can do that here as well.
We have an awesome and diverse community of people from just about every kind of
background you can imagine. This is a great place to share ideas and raise
proposals, improving your skills in design, as well as alignment and consensus
building.
There’s a wealth of opportunities here in SIG Network. There are lots of places
to jump in, and the learning opportunities are boundless.
Sujay: Thanks a lot! It was a really great discussion; we got to know so
many great things about SIG Network. I'm sure that many others will find this
just as useful as I did.