Hey @linuxfoundation@social.lfx.dev why are you sending takedowns on redbubble for generic Unix terms and project names you don't own?
Generative AI exploded onto the scene so quickly that many developers haven’t been able to catch up with its new technical concepts. Whether you’re a builder without an AI/ML background, or you’re feeling like you’ve “missed the boat,” this glossary is for you!
Blog: Kubernetes v1.28: Retroactive Default StorageClass move to GA
Author: Roman Bednář (Red Hat)
Announcing graduation to General Availability (GA) - Retroactive Default StorageClass Assignment in Kubernetes v1.28!
The Kubernetes SIG Storage team is thrilled to announce that the "Retroactive Default StorageClass Assignment" feature,
introduced as an alpha in Kubernetes v1.25, has now graduated to GA and is officially part of the Kubernetes v1.28 release.
This enhancement brings a significant improvement to how default
StorageClasses are assigned to PersistentVolumeClaims (PVCs).
With this feature enabled, you no longer need to create a default StorageClass first and then a PVC to assign the class.
Instead, any PVCs without a StorageClass assigned will now be retroactively updated to include the default StorageClass.
This enhancement ensures that PVCs no longer get stuck in an unbound state, and storage provisioning works seamlessly,
even when a default StorageClass is not defined at the time of PVC creation.
What changed?
The PersistentVolume (PV) controller has been modified to automatically assign a default StorageClass to any unbound
PersistentVolumeClaim with the storageClassName not set. Additionally, the PersistentVolumeClaim
admission validation mechanism within
the API server has been adjusted to allow changing values from an unset state to an actual StorageClass name.
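Here's a minimal sketch of the retroactive behavior in action (the names example-pvc and my-storage-class are hypothetical, and my-storage-class is assumed to already exist in the cluster):

# Create a PVC without a storageClassName while no default StorageClass exists;
# it stays Pending with an empty STORAGECLASS column.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# Later, mark an existing StorageClass as the default.
kubectl patch storageclass my-storage-class -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# The PV controller retroactively sets the PVC's storageClassName.
kubectl get pvc example-pvc -o jsonpath='{.spec.storageClassName}'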
How to use it?
As this feature has graduated to GA, there's no need to enable a feature gate anymore.
Simply make sure you are running Kubernetes v1.28 or later, and the feature will be available for use.
For more details, read about
default StorageClass assignment in the Kubernetes documentation.
You can also read the previous blog post announcing beta graduation in v1.26.
To provide feedback, join our Kubernetes Storage Special Interest Group (SIG)
or participate in discussions on our public Slack channel.
Whoa!!! University of Chicago agrees to pay $13.5 million to students after being accused of participating in a 'price-fixing cartel' with other prestigious schools to limit financial aid
After nearly two years of litigation, UChicago settled claims it conspired with top colleges including Brown and Yale to limit financial aid packages.
Readers have been pointing us to HashiCorp's announcement
that it is moving to its own "Business Source License" for some of its
(formerly) open-source products. Like other companies (example) that have taken this path, HashiCorp
is removing the freedom to use its products commercially in ways that it
sees as competitive. This is, in a real sense, an old and tiresome story.
White House orders federal agencies to shore up cybersecurity, warns of potential exposure | CNN
The White House ordered federal agencies to shore up their cybersecurity after agencies have lagged in implementing a key executive order President Joe Biden issued in 2021, according to a memo first obtained by CNN.
Blog: Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA
Authors: Xing Yang (VMware) and Ashutosh Kumar (Elastic)
The Kubernetes Non-Graceful Node Shutdown feature is now GA in Kubernetes v1.28.
It was introduced as
alpha
in Kubernetes v1.24, and promoted to
beta
in Kubernetes v1.26.
This feature allows stateful workloads to restart on a different node if the
original node is shut down unexpectedly or ends up in a non-recoverable state,
such as a hardware failure or an unresponsive OS.
What is a Non-Graceful Node Shutdown?
In a Kubernetes cluster, a node can be shut down in a planned, graceful way or
unexpectedly because of external causes such as a power outage.
A node shutdown could lead to workload failure if the node is not drained
before the shutdown. A node shutdown can be either graceful or non-graceful.
The Graceful Node Shutdown
feature allows Kubelet to detect a node shutdown event, properly terminate the pods,
and release resources before the actual shutdown.
When a node is shut down but not detected by Kubelet's Node Shutdown Manager,
this becomes a non-graceful node shutdown.
A non-graceful node shutdown is usually not a problem for stateless apps; however,
it is a problem for stateful apps.
A stateful application cannot function properly if its pods are stuck on the
shutdown node and are not restarting on a running node.
In the case of a non-graceful node shutdown, you can manually add an out-of-service taint on the Node.
kubectl taint nodes node-name node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
This taint triggers pods on the node to be forcefully deleted if there are no
matching tolerations on the pods. Persistent volumes attached to the shutdown node
will be detached, and new pods will be created successfully on a different running
node.
Note: Before applying the out-of-service taint, you must verify that a node is
already in a shutdown or power-off state (not in the middle of restarting).
Once all the workload pods that are linked to the out-of-service node have moved
to a new running node, and the shutdown node has been recovered, you should
remove the taint from the affected node.
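Removing the taint uses the standard kubectl syntax with a trailing hyphen:

kubectl taint nodes node-name node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-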
What’s new in stable
With the promotion of the Non-Graceful Node Shutdown feature to stable, the
feature gate NodeOutOfServiceVolumeDetach is locked to true on
kube-controller-manager and cannot be disabled.
Metrics force_delete_pods_total and force_delete_pod_errors_total in the
Pod GC Controller are enhanced to account for all forceful pod deletions.
A reason is added to the metric to indicate whether the pod is forcefully deleted
because it is terminated, orphaned, terminating with the out-of-service taint,
or terminating and unscheduled.
A "reason" is also added to the metric attachdetach_controller_forced_detaches
in the Attach Detach Controller to indicate whether the force detach is caused by
the out-of-service taint or a timeout.
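If you want to watch these counters, here is a rough sketch; the service account
name and host are placeholder assumptions, and the token must be authorized to
read the /metrics path (kube-controller-manager typically serves metrics over
HTTPS on port 10257):

# Hypothetical: scrape the kube-controller-manager metrics endpoint and filter
# the forceful-deletion and forced-detach counters. "metrics-reader" and
# <controller-manager-host> are placeholders, not names from this feature.
TOKEN=$(kubectl create token metrics-reader)
curl -sk -H "Authorization: Bearer $TOKEN" https://<controller-manager-host>:10257/metrics | grep -E 'force_delete_pods_total|force_delete_pod_errors_total|attachdetach_controller_forced_detaches'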
What’s next?
This feature requires a user to manually add a taint to the node to trigger
workload failover and to remove the taint after the node is recovered.
In the future, we plan to find ways to automatically detect and fence nodes
that are shutdown/failed and automatically failover workloads to another node.
How can I learn more?
Check out additional documentation on this feature
here.
How to get involved?
We offer a huge thank you to all the contributors who helped with the design,
implementation, and review of this feature and helped move it from alpha through beta to stable:
Michelle Au (msau42)
Derek Carr (derekwaynecarr)
Danielle Endocrimes (endocrimes)
Baofa Fan (carlory)
Tim Hockin (thockin)
Ashutosh Kumar (sonasingh46)
Hemant Kumar (gnufied)
Yuiko Mouri (YuikoTakada)
Mrunal Patel (mrunalp)
David Porter (bobbypage)
Yassine Tijani (yastij)
Jing Xu (jingxu97)
Xing Yang (xing-yang)
This feature is a collaboration between SIG Storage and SIG Node.
For those interested in getting involved with the design and development of any
part of the Kubernetes Storage system, join the Kubernetes Storage Special
Interest Group (SIG).
For those interested in getting involved with the design and development of the
components that support the controlled interactions between pods and host
resources, join the Kubernetes Node SIG.
Did I mention I switched back to Chrome? Brave would fail me in ways that were hard to pin down | Google Chrome will summarize entire articles for you with built-in generative AI
The AI-powered notes in Chrome are launching first on Android and iOS.
You don't hate JIRA, you hate your manager - Derek Jarvis' Blog
It seems like it has become popular to hate on JIRA. In fact, a good friend of mine sent me this, which is what triggered this post: (if you're the owner of the image, reach out to me and I'll attribute it properly) I'm usually…
Blog: pkgs.k8s.io: Introducing Kubernetes Community-Owned Package Repositories
Author: Marko Mudrinić (Kubermatic)
On behalf of Kubernetes SIG Release, I am very excited to introduce the
Kubernetes community-owned software
repositories for Debian and RPM packages: pkgs.k8s.io! The new package
repositories are a replacement for the Google-hosted package repositories
(apt.kubernetes.io and yum.kubernetes.io) that we've been using since
Kubernetes v1.5.
This blog post contains information about these new package repositories,
what they mean for you as an end user, and how to migrate to the new
repositories.
What do you need to know about the new package repositories?
This is an opt-in change; you're required to manually migrate from the
Google-hosted repository to the Kubernetes community-owned repositories.
See how to migrate later in this announcement for migration information
and instructions.
Access to the Google-hosted repository will remain intact for the foreseeable
future. However, the Kubernetes project plans to stop publishing packages to
the Google-hosted repository in the future. The project strongly recommends
migrating to the Kubernetes package repositories going forward.
The Kubernetes package repositories contain packages beginning with those
Kubernetes versions that were still under support when the community took
over the package builds. This means that anything before v1.24.0 will only be
available in the Google-hosted repository.
There's a dedicated package repository for each Kubernetes minor version.
When upgrading to a different minor release, you must bear in mind that
the package repository details also change.
Why are we introducing new package repositories?
As the Kubernetes project is growing, we want to ensure the best possible
experience for the end users. The Google-hosted repository has been serving
us well for many years, but we started facing some problems that require
significant changes to how we publish packages. Another goal that we have is to
use community-owned infrastructure for all critical components and that
includes package repositories.
Publishing packages to the Google-hosted repository is a manual process that
can be done only by a team of Google employees called
Google Build Admins.
The Kubernetes Release Managers team
is a very diverse team, especially in terms of the time zones that we work in.
Given this constraint, we have to plan every release very carefully to
ensure that we have both a Release Manager and a Google Build Admin available to
carry out the release.
Another problem is that we only have a single package repository. Because of
this, we were not able to publish packages for prerelease versions (alpha,
beta, and rc). This made testing Kubernetes prereleases harder for anyone
interested in doing so. The feedback that we receive from people testing these
releases is critical to ensuring the best quality of releases, so we want to
make testing these releases as easy as possible. On top of that, having only one
repository limited us when it came to publishing dependencies like cri-tools
and kubernetes-cni.
Regardless of all these issues, we're very thankful to Google and Google Build
Admins for their involvement, support, and help all these years!
How do the new package repositories work?
The new package repositories are hosted at pkgs.k8s.io for both Debian and
RPM packages. At this time, this domain points to a CloudFront CDN backed by an
S3 bucket that contains repositories and packages. However, we plan on onboarding
additional mirrors in the future, giving other companies the possibility to
help us serve packages.
Packages are built and published via the OpenBuildService (OBS) platform.
After a long period of evaluating different solutions, we made a decision to
use OpenBuildService as a platform to manage our repositories and packages.
First of all, OpenBuildService is an open source platform used by a large
number of open source projects and companies, like openSUSE, VideoLAN,
Dell, Intel, and more. OpenBuildService has many features that make it very
flexible and easy to integrate with our existing release tooling. It also
allows us to build packages in a similar way as for the Google-hosted
repository, making the migration process as seamless as possible.
SUSE sponsors the Kubernetes project with access to their reference
OpenBuildService setup (build.opensuse.org) and
with technical support to integrate OBS with our release processes.
We use SUSE's OBS instance for building and publishing packages. Upon building
a new release, our tooling automatically pushes the needed artifacts and
package specifications to build.opensuse.org. That triggers the build
process, which builds packages for all supported architectures (AMD64,
ARM64, PPC64LE, S390X). At the end, the generated packages are automatically
pushed to our community-owned S3 bucket, making them available to all users.
We want to take this opportunity to thank SUSE for allowing us to use
build.opensuse.org and their generous support to make this integration
possible!
What are the significant differences between the Google-hosted and Kubernetes package repositories?
There are three significant differences that you should be aware of:
There's a dedicated package repository for each Kubernetes minor release.
For example, the repository called core:/stable:/v1.28 only hosts packages for
stable Kubernetes v1.28 releases. This means you can install v1.28.0 from
this repository, but you can't install v1.27.0 or any release from another
minor version. Upon upgrading to another minor version, you have to add a
new repository and optionally remove the old one (see the sketch after this list)
There's a difference in which cri-tools and kubernetes-cni package
versions are available in each Kubernetes repository:
These two packages are dependencies for kubelet and kubeadm.
Kubernetes repositories for v1.24 to v1.27 have the same versions of these
packages as the Google-hosted repository.
Kubernetes repositories for v1.28 and onwards will only have the
versions that are used by that Kubernetes minor release.
Speaking of v1.28, only kubernetes-cni 1.2.0 and cri-tools v1.28 are going
to be available in the repository for Kubernetes v1.28.
Similarly for v1.29, we only plan on publishing cri-tools v1.29 and
whatever kubernetes-cni version is going to be used by Kubernetes v1.29.
The revision part of the package version (the -00 part in 1.28.0-00) is
now autogenerated by the OpenBuildService platform and has a different format.
The revision is now in the format of -x.y, e.g. 1.28.0-1.1.
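As a concrete sketch of the first point above, switching an apt-based system
from v1.28 to v1.29 is just a matter of updating the version segment in the
repository definition (the sed invocation is illustrative, not official tooling):

# Hypothetical example: move the apt definition from v1.28 to v1.29
# when upgrading to the next minor release.
sudo sed -i 's|/core:/stable:/v1.28/|/core:/stable:/v1.29/|' /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update

# You can then inspect the available versions, including the new -x.y
# revision suffix.
apt-cache madison kubelet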
Does this in any way affect existing Google-hosted repositories?
The Google-hosted repository and all packages published to it will continue
working in the same way as before. There are no changes in how we build and
publish packages to the Google-hosted repository; all newly introduced changes
only affect packages published to the community-owned repositories.
However, as mentioned at the beginning of this blog post, we plan to stop
publishing packages to the Google-hosted repository in the future.
How to migrate to the Kubernetes community-owned repositories?
Debian, Ubuntu, and operating systems using apt/apt-get
Replace the apt repository definition so that apt points to the new
repository instead of the Google-hosted repository. Make sure to replace the
Kubernetes minor version in the command below with the minor version
that you're currently using:
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Download the public signing key for the Kubernetes package repositories.
The same signing key is used for all repositories, so you can disregard the
version in the URL:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Update the apt package index:
sudo apt-get update
CentOS, Fedora, RHEL, and operating systems using rpm/dnf
Replace the yum repository definition so that yum points to the new
repository instead of the Google-hosted repository. Make sure to replace the
Kubernetes minor version in the command below with the minor version
that you're currently using:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
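The exclude line above prevents the Kubernetes packages from being upgraded
accidentally during routine system updates. When you intentionally install or
upgrade them, lift the exclusion for that one transaction, for example:

# Install or upgrade the Kubernetes packages, temporarily bypassing the exclude list.
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes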
Can I roll back to the Google-hosted repository after migrating to the Kubernetes repositories?
In general, yes. Just follow the same steps as when migrating, but use the
parameters for the Google-hosted repository. You can find those parameters in a
document like "Installing kubeadm".
Why isn’t there a stable list of domains/IPs? Why can’t I restrict package downloads?
Our plan for pkgs.k8s.io is to make it work as a redirector to a set of
backends (package mirrors) based on the user's location. The nature of this
change means that a user downloading a package could be redirected to any mirror
at any time. Given this architecture and our plans to onboard additional mirrors
in the near future, we can't provide a list of IP addresses or domains that you
can add to an allow list.
Restrictive control mechanisms like man-in-the-middle proxies or network
policies that restrict access to a specific list of IPs/domains will break with
this change. For these scenarios, we encourage you to mirror the release
packages to a local package repository that you have strict control over.
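One hypothetical way to do that on apt-based systems (the directory layout and
the trusted=yes shortcut are illustrative assumptions, not a project
recommendation) is to download the packages you need and publish them from a
simple internal index:

# Sketch of a minimal internal mirror; assumes dpkg-dev is installed and the
# Kubernetes apt repository is already configured on this machine.
mkdir -p /srv/k8s-mirror && cd /srv/k8s-mirror
apt-get download kubelet kubeadm kubectl kubernetes-cni cri-tools
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# Serve /srv/k8s-mirror over internal HTTP and point clients at it, e.g. with a
# "deb [trusted=yes] http://<internal-host>/k8s-mirror ./" sources entry.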
What should I do if I detect some abnormality with the new repositories?
If you encounter any issue with the new Kubernetes package repositories, please
file an issue in the
kubernetes/release repository.