Suggested Reads


Since I somehow made it to Paris and London in the first half of this year. Plus, I love these LEGO Architecture sets. I did want to be an architect as a kid plus I didn’t have a lot of LEGO growing up. #LEGO #Paris #London

June 08, 2024 at 08:57AM

via Instagram https://instagr.am/p/C79J1hdR32Y/

·instagr.am·
10 Years of Kubernetes

https://kubernetes.io/blog/2024/06/06/10-years-of-kubernetes/

Ten years ago, on June 6th, 2014, the first commit of Kubernetes was pushed to GitHub. That first commit, with 250 files and 47,501 lines of Go, Bash, and Markdown, kicked off the project we have today. Who could have predicted that 10 years later Kubernetes would grow to become one of the largest open source projects to date, with over 88,000 contributors from more than 8,000 companies across 44 countries?

This milestone isn't just for Kubernetes but for the Cloud Native ecosystem that blossomed from it. There are close to 200 projects within the CNCF itself, with contributions from 240,000+ individual contributors and thousands more in the greater ecosystem. Kubernetes would not be where it is today without them, the 7M+ developers, and the even larger user community that have all helped shape the ecosystem into what it is today.

Kubernetes' beginnings - a converging of technologies

The ideas underlying Kubernetes started well before the first commit, or even the first prototype (which came about in 2013). In the early 2000s, Moore's Law was well in effect. Computing hardware was becoming more and more powerful at an incredibly fast rate. Correspondingly, applications were growing more and more complex. This combination of hardware commoditization and application complexity pointed to a need to further abstract software from hardware, and solutions started to emerge.

Like many companies at the time, Google was scaling rapidly, and its engineers were interested in the idea of creating a form of isolation in the Linux kernel. Google engineer Rohit Seth described the concept in an email in 2006:

We use the term container to indicate a structure against which we track and charge utilization of system resources like memory, tasks, etc. for a Workload.

In March of 2013, a 5-minute lightning talk called "The future of Linux Containers," presented by Solomon Hykes at PyCon, introduced an upcoming open source tool called "Docker" for creating and using Linux Containers. Docker introduced a level of usability to Linux Containers that made them accessible to more users than ever before, and the popularity of Docker, and thus of Linux Containers, skyrocketed. With Docker making the abstraction of Linux Containers accessible to all, running applications in much more portable and repeatable ways was suddenly possible, but the question of scale remained.

Google's Borg system for managing application orchestration at scale had adopted Linux containers as they were developed in the mid-2000s. Since then, the company had also started working on a new version of the system called "Omega." Engineers at Google who were familiar with the Borg and Omega systems saw the popularity of containerization driven by Docker. They recognized not only the need for an open source container orchestration system but its "inevitability," as described by Brendan Burns in this blog post. That realization in the fall of 2013 inspired a small team to start working on a project that would later become Kubernetes. That team included Joe Beda, Brendan Burns, Craig McLuckie, Ville Aikas, Tim Hockin, Dawn Chen, Brian Grant, and Daniel Smith.

A decade of Kubernetes

Kubernetes' history begins with that historic commit on June 6th, 2014, and the subsequent announcement of the project in a June 10th keynote by Google engineer Eric Brewer at DockerCon 2014 (and its corresponding Google blog).

Over the next year, a small community of contributors, largely from Google and Red Hat, worked hard on the project, culminating in a version 1.0 release on July 21st, 2015. Alongside 1.0, Google announced that Kubernetes would be donated to a newly formed branch of the Linux Foundation called the Cloud Native Computing Foundation (CNCF).

Despite reaching 1.0, the Kubernetes project was still very challenging to use and understand. Kubernetes contributor Kelsey Hightower took special note of the project's shortcomings in ease of use and on July 7, 2016, he pushed the first commit of his famed "Kubernetes the Hard Way" guide.

The project has changed enormously since its original 1.0 release, experiencing big wins such as Custom Resource Definitions (CRDs) going GA in 1.16 and full dual-stack support launching in 1.23, as well as community "lessons learned" from the removal of widely used beta APIs in 1.22 and the deprecation of Dockershim.

Some notable updates, milestones and events since 1.0 include:

December 2016 — Kubernetes 1.5 introduces runtime pluggability with initial CRI support and alpha Windows node support. OpenAPI also appears for the first time, paving the way for clients to be able to discover extension APIs.

This release also introduced StatefulSets and PodDisruptionBudgets in Beta.

April 2017 — Introduction of Role-Based Access Control (RBAC).

June 2017 — In Kubernetes 1.7, ThirdPartyResources or "TPRs" are replaced with CustomResourceDefinitions (CRDs).

December 2017 — Kubernetes 1.9 sees the Workloads API becoming GA (Generally Available). The release blog states: "Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback."

December 2018 — In 1.13, the Container Storage Interface (CSI) reaches GA, kubeadm tool for bootstrapping minimum viable clusters reaches GA, and CoreDNS becomes the default DNS server.

September 2019 — Custom Resource Definitions go GA in Kubernetes 1.16.

August 2020 — Kubernetes 1.19 increases the support window for releases to 1 year.

December 2020 — Dockershim is deprecated in 1.20.

April 2021 — The Kubernetes release cadence changes from 4 releases per year to 3 releases per year.

July 2021 — Widely used beta APIs are removed in Kubernetes 1.22.

May 2022 — Kubernetes 1.24 sees beta APIs disabled by default to reduce upgrade conflicts, along with the removal of Dockershim, which led to widespread user confusion (we've since improved our communication!).

December 2022 — In 1.26, a significant batch and Job API overhaul paved the way for better support for AI/ML/batch workloads.

PS: Curious to see how far the project has come for yourself? Check out this tutorial for spinning up a Kubernetes 1.0 cluster created by community members Carlos Santana, Amim Moises Salum Knabben, and James Spurin.

Kubernetes offers more extension points than we can count. Originally designed to work with Docker and only Docker, now you can plug in any container runtime that adheres to the CRI standard. There are other similar interfaces: CSI for storage and CNI for networking. And that's far from all you can do. In the last decade, whole new patterns have emerged, such as using Custom Resource Definitions (CRDs) to support third-party controllers, now a huge part of the Kubernetes ecosystem.
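For readers who haven't used the CRD pattern, it boils down to registering a new API type with the cluster. A minimal sketch of a CustomResourceDefinition manifest, built here as a plain Python dict so the required fields are visible; the `example.com` group and `CronTab` kind are illustrative names, not from the post:

```python
# Sketch of a minimal apiextensions.k8s.io/v1 CustomResourceDefinition,
# the extension point that replaced ThirdPartyResources in 1.7.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "crontabs.example.com"},  # must be <plural>.<group>
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "crontabs", "singular": "crontab", "kind": "CronTab"},
        "versions": [{
            "name": "v1",
            "served": True,    # this version is exposed by the API server
            "storage": True,   # and used as the storage representation
            "schema": {"openAPIV3Schema": {"type": "object"}},
        }],
    },
}

# The metadata name is derived from the plural name and the group.
expected = f'{crd["spec"]["names"]["plural"]}.{crd["spec"]["group"]}'
assert crd["metadata"]["name"] == expected
print("CRD defines kind:", crd["spec"]["names"]["kind"])
```

Once applied to a cluster, a CRD like this makes the API server serve `CronTab` objects, and a third-party controller can reconcile them.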

The community building the project has also expanded immensely over the last decade. Using DevStats, we can see the incredible volume of contribution over the last decade that has made Kubernetes the second-largest open source project in the world:

88,474 contributors

15,121 code committers

4,228,347 contributions

158,530 issues

311,787 pull requests

Kubernetes today

Since its early days, the project has seen enormous growth in technical capability, usage, and contribution. The project is still actively working to improve and better serve its users.

In the upcoming 1.31 release, the project will celebrate the culmination of an important long-term project: the removal of in-tree cloud provider code. In this largest migration in Kubernetes history, roughly 1.5 million lines of code have been removed, reducing the binary sizes of core components by approximately 40%. In the project's early days, it was clear that extensibility would be key to success. However, it wasn't always clear how that extensibility should be achieved. This migration removes a variety of vendor-specific capabilities from the core Kubernetes code base. Vendor-specific capabilities can now be better served by other pluggable extensibility features or patterns, such as Custom Resource Definitions (CRDs) or API standards like the Gateway API.

Kubernetes also faces new challenges in serving its vast user base, and the community is adapting accordingly. One example of this is the migration of image hosting to the new, community-owned registry.k8s.io. The egress bandwidth and costs of providing pre-compiled binary images for user consumption have become immense. This new registry change enables the community to continue providing these convenient images in more cost- and performance-efficient ways. Make sure you check out the blog post and update any automation you have to use registry.k8s.io!
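In practice, updating automation for the registry change mostly means rewriting image references from the old k8s.gcr.io host to registry.k8s.io. A rough sketch of that rewrite over a list of image strings (the specific image names below are illustrative):

```python
# Rewrite image references from the legacy Kubernetes registry host to
# the community-owned registry.k8s.io; other registries are untouched.
OLD_REGISTRY = "k8s.gcr.io"
NEW_REGISTRY = "registry.k8s.io"

def migrate(image: str) -> str:
    """Swap the registry host prefix if the image came from the old registry."""
    if image.startswith(OLD_REGISTRY + "/"):
        return NEW_REGISTRY + image[len(OLD_REGISTRY):]
    return image

images = [
    "k8s.gcr.io/kube-apiserver:v1.24.0",
    "docker.io/library/nginx:1.25",  # not a Kubernetes image; unchanged
]
migrated = [migrate(i) for i in images]
print(migrated)
# → ['registry.k8s.io/kube-apiserver:v1.24.0', 'docker.io/library/nginx:1.25']
```

A real migration would apply the same substitution to manifests, Helm values, and CI configuration rather than an in-memory list.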

The future of Kubernetes

A decade in, the future of Kubernetes still looks bright. The community is prioritizing changes that both improve the user experience and enhance the sustainability of the project. The world of application development continues to evolve, and Kubernetes is poised to change along with it.

In 2024, the advent of AI changed a once-niche workload type into one of prominent importance. Distributed computing

·kubernetes.io·
anchore/syft: CLI tool and library for generating a Software Bill of Materials from container images and filesystems

Syft A CLI tool and Go library for generating a Software Bill of Materials (SBOM) from container images and filesystems. Exceptional for vulnerability detection…

June 5, 2024 at 01:52PM

via Instapaper

·github.com·
I think people should’ve already intuited this but in case you weren’t | OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance (Gift Article)
A group of current and former employees is calling for sweeping changes to the artificial intelligence industry, including greater transparency and protections for whistle-blowers.
·nytimes.com·
Last Week in Kubernetes Development - Week Ending June 2nd 2024

Week Ending June 2nd, 2024

https://lwkd.info/2024/20240603

Developer News

Kubernetes turns 10 this week! The KuberTENes Birthday Bash is happening on June 6th all across the world. Attend an event near you to join in on the celebrations.

Carlos Santana started a Google document to collect KuberTENes trivia and timeline information. Help contribute to the doc or feel free to use it for organizing a KuberTENes party where you live!

Release Schedule

Next Deadline: Production Readiness Freeze, June 6th, 2024

We’re approaching the enhancements freeze deadline, with only two more weeks left. We have a total of 49 KEPs opted in for the v1.31 release as of now. Don’t forget to talk to your SIG leads to get a lead-opted-in label if you’re planning to get your KEP shipped in v1.31. The production readiness freeze is coming up on June 6th, one week before the enhancements freeze. Make sure that your KEP has a completed PRR questionnaire before the 6th to ensure enough time for the PRR team to review all the KEPs.

Featured PRs

124685: Make kubeadm independent from crictl

This PR proposes making kubeadm independent of the crictl binary, simplifying kubeadm by eliminating the need for an extra tool and offering more flexibility by letting users choose their preferred CRI implementation. Kubeadm will use a built-in library (cri-client) to interact with the Container Runtime Interface (CRI) instead of relying on crictl. While crictl will still be available for one more kubeadm release (v1.31), it won’t be installed by default anymore; users who need crictl after v1.31 will have to update their scripts to install it manually.

KEP of the Week

KEP 4580: Deprecate and remove Kubelet RunOnce mode

This KEP proposes to deprecate and remove kubelet’s RunOnce mode. RunOnce mode does not support any of the newer Pod features like init containers and the Pod lifecycle for RunOnce mode is not well defined. Podman addresses the same use case in a more well-supported way. RunOnce mode also doesn’t work when the kubelet is running in systemd mode.

This was first brought up way back in 2017, and is finally on track to be deprecated in v1.31.

Other Merges

Restore scheduler performance on big clusters to pre-1.30 speeds, by changing NodeToStatusMap; this will break existing PostFilter plugins

You can make a kube-proxy image on Windows

LoadBalancer will check new fields for status changes

Add a generic storage provider for future generic control planes

Audit log APF queue latency

Scheduler has livez and readyz endpoints

kubeadm uses the HealthzBindAddress, not localhost, and stops hiding unsupported klog flags

Handle filepaths with spaces passed to commands on Windows

Test Improvements: Add ability to set feature gates generically, container name completion, CBOR/JSON tests

Promotions

DevicePluginCDIDevices to GA

ServiceAccountTokenNodeBinding to beta

Version Updates

CSI Spec to v1.9.0

Subprojects and Dependency Updates

cloud-provider-aws v1.30.1: ensure that addresses are added in network device index order. Also v1.29.3, v1.28.6, v1.27.7, v1.26.12

kompose v1.34.0: expose container to host only with labels

etcd v3.5.14: add support for AllowedCN and AllowedHostname through config file

gRPC v1.64.1: fix use-after-free issue. Also v1.63.1

CRI-O v1.30.2: fix CVE-2024-5154. Also v1.29.5, v1.28.7

via Last Week in Kubernetes Development https://lwkd.info/

June 03, 2024 at 06:30AM

·lwkd.info·

smithy-lang/smithy: Smithy is a protocol-agnostic interface definition language and set of tools for generating clients, servers, and documentation for any programming language.

Smithy CLI installed

June 4, 2024 at 02:26PM

via Instapaper

·github.com·
Episode 124: Julia Ferraioli on Open Source Stories, and Responsible Recognition for Open Source ...

Guest Julia Ferraioli Panelists Richard Littauer | Justin Dorfman | Alyssa Wright Show Notes Hello and welcome to Sustain! The podcast where we talk about…

June 4, 2024 at 12:47PM

via Instapaper

·youtube.com·
What to Know About the Open Versus Closed Software Debate

A.I. companies are divided over whether the technology should be freely available to anyone for…

June 4, 2024 at 12:35PM

via Instapaper

·nytimes.com·
Snowflake Community
Join our community of data professionals to learn, connect, share and innovate together
Detecting and Preventing Unauthorized User Access: Instructions
·community.snowflake.com·
The New Builders: Tom Callaway talks about Building OSS Culture at AWS

Amazon Web Services is not always known for its engagement with open source communities, but there are dedicated teams within AWS trying to change that…

June 3, 2024 at 12:14PM

via Instapaper

·youtube.com·
From Makefile to Justfile (or Taskfile): Recipe Runner Replacement

Discover the power of modern recipe runners as we transition from the traditional Makefile to Justfile (or stick to Taskfile). This video guides you through the reasons behind the switch, showing how these newer tools can simplify your development workflows. You'll learn about Justfile syntax, its capabilities, and the benefits it offers for task automation and reproducibility.
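For a taste of what the video covers: a Justfile reads much like a Makefile, but recipes can take parameters and there is no tab-versus-space trap. A minimal illustrative sketch (the recipe and variable names here are made up, not taken from the video):

```
# Variables use `:=`; the first recipe is the default for a bare `just`.
tag := "v0.1.0"

build:
    go build -o bin/app .

# Recipes can depend on other recipes, as in Make.
test: build
    go test ./...

# Recipes can take parameters, with optional defaults.
release version=tag:
    echo "releasing {{version}}"
```

Running `just release` would use the default `tag` value, while `just release v0.2.0` overrides it on the command line.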

#Justfile #Taskfile #Makefile

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Transcript and commands: https://devopstoolkit.live/ci-cd/from-makefile-to-justfile-or-taskfile-recipe-runner-replacement
🔗 just: https://github.com/casey/just
🎬 Say Goodbye to Makefile - Use Taskfile to Manage Tasks in CI/CD Pipelines and Locally: https://youtu.be/Z7EnwBaJzCk


▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Introduction to just and Justfile
02:19 just Define Justfile and just Run It
07:47 just and Justfile Pros and Cons

via YouTube https://www.youtube.com/watch?v=hgNN2wOE7lc

·youtube.com·
Release Notes for Alpine 3.20.0 - Alpine Linux

Base System: grub 2.12. When upgrading existing installations using grub on UEFI systems, make sure to update the installed bootloader before rebooting, otherwise…

June 3, 2024 at 11:34AM

via Instapaper

·wiki.alpinelinux.org·