Link Sharing

Ingress NGINX Retirement: What You Need to Know

https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/

To prioritize the safety and security of the ecosystem, Kubernetes SIG Network and the Security Response Committee are announcing the upcoming retirement of Ingress NGINX. Best-effort maintenance will continue until March 2026. Afterward, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered. Existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available.

We recommend migrating to one of the many alternatives. Consider migrating to Gateway API, the modern replacement for Ingress. If you must continue using Ingress, many alternative Ingress controllers are listed in the Kubernetes documentation. Continue reading for further information about the history and current state of Ingress NGINX, as well as next steps.

About Ingress NGINX

Ingress is the original user-friendly way to direct network traffic to workloads running on Kubernetes. (Gateway API is a newer way to achieve many of the same goals.) In order for an Ingress to work in your cluster, there must be an Ingress controller running. There are many Ingress controller choices available, which serve the needs of different users and use cases. Some are cloud-provider specific, while others have more general applicability.

Ingress NGINX was an Ingress controller, developed early in the history of the Kubernetes project as an example implementation of the API. It became very popular due to its tremendous flexibility, breadth of features, and independence from any particular cloud or infrastructure provider. Since those days, many other Ingress controllers have been created within the Kubernetes project by community groups, and by cloud native vendors. Ingress NGINX has continued to be one of the most popular, deployed as part of many hosted Kubernetes platforms and within innumerable independent users’ clusters.

History and Challenges

The breadth and flexibility of Ingress NGINX has caused maintenance challenges. Changing expectations about cloud native software have also added complications. What were once considered helpful options have sometimes come to be considered serious security flaws, such as the ability to add arbitrary NGINX configuration directives via the "snippets" annotations. Yesterday’s flexibility has become today’s insurmountable technical debt.

Despite the project’s popularity among users, Ingress NGINX has always struggled with insufficient or barely-sufficient maintainership. For years, the project has had only one or two people doing development work, on their own time, after work hours and on weekends. Last year, the Ingress NGINX maintainers announced their plans to wind down Ingress NGINX and develop a replacement controller together with the Gateway API community. Unfortunately, even that announcement failed to generate additional interest in helping maintain Ingress NGINX or develop InGate to replace it. (InGate development never progressed far enough to create a mature replacement; it will also be retired.)

Current State and Next Steps

Currently, Ingress NGINX is receiving best-effort maintenance. SIG Network and the Security Response Committee have exhausted our efforts to find additional support to make Ingress NGINX sustainable. To prioritize user safety, we must retire the project.

In March 2026, Ingress NGINX maintenance will be halted, and the project will be retired. After that time, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered. The GitHub repositories will be made read-only and left available for reference.

Existing deployments of Ingress NGINX will not be broken. Existing project artifacts such as Helm charts and container images will remain available.

In most cases, you can check whether you use Ingress NGINX by running kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx with cluster administrator permissions.

We would like to thank the Ingress NGINX maintainers for their work in creating and maintaining this project; their dedication remains impressive. This Ingress controller has powered billions of requests in datacenters and homelabs all around the world. In a lot of ways, Kubernetes wouldn't be where it is without Ingress NGINX, and we are grateful for so many years of incredible effort.

SIG Network and the Security Response Committee recommend that all Ingress NGINX users begin migration to Gateway API or another Ingress controller immediately. Many options are listed in the Kubernetes documentation: Gateway API, Ingress. Additional options may be available from vendors you work with.

via Kubernetes Blog https://kubernetes.io/

November 11, 2025 at 01:30PM

Building Kubernetes (a lite version) from scratch in Go, with Owumi Festus

https://ku.bz/pf5kK9lQF

Festus Owumi walks through his project of building a lightweight version of Kubernetes in Go. He removed etcd (replacing it with in-memory storage), skipped containers entirely, dropped authentication, and focused purely on the control plane mechanics. Through this process, he demonstrates how the reconciliation loop, API server concurrency handling, and scheduling logic actually work at their most basic level.

You will learn:

How the reconciliation loop works - The core concept of desired state vs current state that drives all Kubernetes operations (see the sketch after this list)

Why the API server is the gateway to etcd - How Kubernetes prevents race conditions using optimistic concurrency control and why centralized validation matters

What the scheduler actually does - Beyond simple round-robin assignment, understanding node affinity, resource requirements, and the complex scoring algorithms that determine pod placement

The complete pod lifecycle - Step-by-step walkthrough from kubectl command to running pod, showing how independent components work together like an orchestra
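
To make the first two of those ideas concrete, here is a minimal, self-contained Go sketch in the spirit of the episode's from-scratch approach (not Festus's actual code; the types and in-memory store are illustrative assumptions). It shows a reconciliation loop converging current state toward desired state, writing through a resourceVersion-style check that mimics the API server's optimistic concurrency control.

package main

import (
	"errors"
	"fmt"
)

// App is a stand-in for a stored object; ResourceVersion mimics etcd's revision.
type App struct {
	ResourceVersion int
	Replicas        int
}

// store stands in for etcd behind the API server.
var store = App{ResourceVersion: 1, Replicas: 0}

// update applies a write only if the caller read the latest version,
// mirroring the API server's 409 Conflict on stale writes.
func update(seenVersion, replicas int) error {
	if seenVersion != store.ResourceVersion {
		return errors.New("conflict: object changed, re-read and retry")
	}
	store.ResourceVersion++
	store.Replicas = replicas
	return nil
}

// reconcile nudges current state one step toward the desired state.
func reconcile(desired int) {
	current := store // read current state, remembering the version we saw
	switch {
	case current.Replicas < desired:
		if err := update(current.ResourceVersion, current.Replicas+1); err != nil {
			fmt.Println(err) // another writer won; next iteration re-reads
			return
		}
		fmt.Println("scaled up to", store.Replicas)
	case current.Replicas > desired:
		if err := update(current.ResourceVersion, current.Replicas-1); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("scaled down to", store.Replicas)
	default:
		fmt.Println("in sync")
	}
}

func main() {
	// A real controller loops forever; four iterations are enough to converge here.
	for i := 0; i < 4; i++ {
		reconcile(3)
	}
}

Running it prints three scale-up steps and then "in sync"; a real controller runs the same loop indefinitely against the API server rather than an in-memory struct.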

Sponsor

This episode is sponsored by StormForge by CloudBolt — automatically rightsize your Kubernetes workloads with ML-powered optimization

More info

Find all the links and info for this episode here: https://ku.bz/pf5kK9lQF

Interested in sponsoring an episode? Learn more.

via KubeFM https://kube.fm

November 11, 2025 at 05:00AM

DevOps & AI Toolkit - AI Agent Architecture Explained: LLMs Context & Tool Execution - https://www.youtube.com/watch?v=bfBOn2Ahj4c

AI Agent Architecture Explained: LLMs, Context & Tool Execution

You type "Create a PostgreSQL database in AWS" into Claude Code or Cursor, and it just works. But how? Most people think the AI does everything, but that's wrong. The AI can't touch your files or run commands on its own. This video breaks down the real architecture behind AI coding agents, explaining the three key players that make it all work: you (providing intent), the agent (the orchestrator), and the LLM (the reasoning brain). Understanding this matters if you're using these tools every day.

We'll walk through increasingly sophisticated architectures, from basic system prompts to the complete agent loop that enables real work. You'll learn how tools get executed, what context really means, how the agent manages the loop between you and the LLM, and why the LLM is stateless. We'll also cover practical considerations like MCP (Model Context Protocol) for integrating external tools and context limits that affect performance. By the end, you'll understand that the agent is actually the only "dumb" actor in the system—it's pure execution with no intelligence. The LLM provides the brains, you provide the intent, and the agent coordinates everything to make it happen.

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: RavenDB 🔗 https://ravendb.net ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

#AIAgents #LLM #HowAIWorks

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/ai/ai-agent-architecture-explained-llms,-context--tool-execution

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 AI Agents Explained
01:02 RavenDB (sponsor)
02:16 How Do Agents Work?
05:42 How AI Agent Loops Work?
09:36 MCP (Model Context Protocol) & Context Limits
11:38 AI Agents Explained: Key Takeaways

via YouTube https://www.youtube.com/watch?v=bfBOn2Ahj4c

Announcing the 2025 Steering Committee Election Results

https://kubernetes.io/blog/2025/11/09/steering-committee-results-2025/

The 2025 Steering Committee Election is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2025. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.

The Steering Committee oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their charter.

Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.

Results

Congratulations to the elected committee members whose two-year terms begin immediately (listed in alphabetical order by GitHub handle):

Kat Cosgrove (@katcosgrove), Minimus

Paco Xu (@pacoxu), DaoCloud

Rita Zhang (@ritazh), Microsoft

Maciej Szulik (@soltysh), Defense Unicorns

They join continuing members:

Antonio Ojea (@aojea), Google

Benjamin Elder (@BenTheElder), Google

Sascha Grunert (@saschagrunert), Red Hat

Maciej Szulik and Paco Xu are returning Steering Committee Members.

Big thanks!

Thank you and congratulations on a successful election to this round’s election officers:

Christoph Blecker (@cblecker)

Nina Polshakova (@npolshakova)

Sreeram Venkitesh (@sreeram-venkitesh)

Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:

Stephen Augustus (@justaugustus), Bloomberg

Patrick Ohly (@pohly), Intel

And thank you to all the candidates who came forward to run for election.

Get involved with the Steering Committee

This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee meeting notes and weigh in by filing an issue or creating a PR against their repo. They have an open meeting on the first Monday of every month at 8am PT. They can also be contacted at their public mailing list steering@kubernetes.io.

You can see what the Steering Committee meetings are all about by watching past meetings on the YouTube Playlist.

This post was adapted from one written by the Contributor Comms Subproject. If you want to write stories about the Kubernetes community, learn more about us.

via Kubernetes Blog https://kubernetes.io/

November 09, 2025 at 03:10PM

Bootcrew
Experimental Bootc Images. Bootcrew has 10 repositories available. Follow their code on GitHub.
Docker-Sponsored Open Source Program
The Docker-Sponsored Open Source Program offers non-commercial, open source developers the tools they need to push their projects forward. Apply today!
Gateway API 1.4: New Features

https://kubernetes.io/blog/2025/11/06/gateway-api-v1-4/

Ready to rock your Kubernetes networking? The Kubernetes SIG Network community has announced the General Availability (GA) release of Gateway API v1.4.0! Released on October 6, 2025, version 1.4.0 reinforces the path for modern, expressive, and extensible service networking in Kubernetes.

Gateway API v1.4.0 brings three new features to the Standard channel (Gateway API's GA release channel):

BackendTLSPolicy for TLS between gateways and backends

supportedFeatures in GatewayClass status

Named rules for Routes

and introduces three new experimental features:

Mesh resource for service mesh configuration

Default gateways to ease configuration burden

externalAuth filter for HTTPRoute

Graduations to Standard Channel

Backend TLS policy

Leads: Candace Holman, Norwin Schnyder, Katarzyna Łach

GEP-1897: BackendTLSPolicy

BackendTLSPolicy is a new Gateway API type for specifying the TLS configuration of the connection from the Gateway to backend pod(s). Prior to the introduction of BackendTLSPolicy, there was no API specification that allowed encrypted traffic on the hop from Gateway to backend.

The BackendTLSPolicy validation configuration requires a hostname. This hostname serves two purposes. It is used as the SNI header when connecting to the backend and for authentication, the certificate presented by the backend must match this hostname, unless subjectAltNames is explicitly specified.

If subjectAltNames (SANs) are specified, the hostname is only used for SNI, and authentication is performed against the SANs instead. If you still need to authenticate against the hostname value in this case, you MUST add it to the subjectAltNames list.

BackendTLSPolicy validation configuration also requires either caCertificateRefs or wellKnownCACertificates. caCertificateRefs refer to one or more (up to 8) PEM-encoded TLS certificate bundles. If there are no specific certificates to use, then depending on your implementation, you may use wellKnownCACertificates, set to "System" to tell the Gateway to use an implementation-specific set of trusted CA Certificates.

In this example, the BackendTLSPolicy is configured to use certificates defined in the auth-cert ConfigMap to connect with a TLS-encrypted upstream connection where pods backing the auth service are expected to serve a valid certificate for auth.example.com. It uses subjectAltNames with a Hostname type, but you may also use a URI type.

apiVersion: gateway.networking.k8s.io/v1
kind: BackendTLSPolicy
metadata:
  name: tls-upstream-auth
spec:
  targetRefs:
  - kind: Service
    name: auth
    group: ""
    sectionName: "https"
  validation:
    caCertificateRefs:
    - group: ""  # core API group
      kind: ConfigMap
      name: auth-cert
    subjectAltNames:
    - type: "Hostname"
      hostname: "auth.example.com"

In this example, the BackendTLSPolicy is configured to use system certificates to connect with a TLS-encrypted backend connection where Pods backing the dev Service are expected to serve a valid certificate for dev.example.com.

apiVersion: gateway.networking.k8s.io/v1
kind: BackendTLSPolicy
metadata:
  name: tls-upstream-dev
spec:
  targetRefs:
  - kind: Service
    name: dev
    group: ""
    sectionName: "btls"
  validation:
    wellKnownCACertificates: "System"
    hostname: dev.example.com

More information on the configuration of TLS in Gateway API can be found in Gateway API - TLS Configuration.

Status information about the features that an implementation supports

Leads: Lior Lieberman, Beka Modebadze

GEP-2162: Supported features in GatewayClass Status

GatewayClass status has a new field, supportedFeatures. This addition allows implementations to declare the set of features they support. This provides a clear way for users and tools to understand the capabilities of a given GatewayClass.

This feature's name for conformance tests (and GatewayClass status reporting) is SupportedFeatures. Implementations must populate the supportedFeatures field in the .status of the GatewayClass before the GatewayClass is accepted, or in the same operation.

Here’s an example of a supportedFeatures published under GatewayClass' .status:

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
...
status:
  conditions:
  - lastTransitionTime: "2022-11-16T10:33:06Z"
    message: Handled by Foo controller
    observedGeneration: 1
    reason: Accepted
    status: "True"
    type: Accepted
  supportedFeatures:
  - HTTPRoute
  - HTTPRouteHostRewrite
  - HTTPRoutePortRedirect
  - HTTPRouteQueryParamMatching

The graduation of SupportedFeatures to Standard helped improve the conformance testing process for Gateway API. The conformance test suite will now automatically run tests based on the features populated in the GatewayClass' status. This creates a strong, verifiable link between an implementation's declared capabilities and the test results, making it easier for implementers to run the correct conformance tests and for users to trust the conformance reports.

This means that when the supportedFeatures field is populated in the GatewayClass status, there will be no need for additional conformance test flags like --supported-features, --exempt-features, or --all-features. It's important to note that Mesh features are an exception to this and can be tested for conformance by using Conformance Profiles, or by manually providing any combination of feature-related flags until the dedicated resource graduates from the experimental channel.

Named rules for Routes

GEP-995: Adding a new name field to all xRouteRule types (HTTPRouteRule, GRPCRouteRule, etc.)

Lead: Guilherme Cassolato

This enhancement enables route rules to be explicitly identified and referenced across the Gateway API ecosystem. Some of the key use cases include:

Status: Allowing status conditions to reference specific rules directly by name.

Observability: Making it easier to identify individual rules in logs, traces, and metrics.

Policies: Enabling policies (GEP-713) to target specific route rules via the sectionName field in their targetRef[s].

Tooling: Simplifying filtering and referencing of route rules in tools such as gwctl, kubectl, and general-purpose utilities like jq and yq.

Internal configuration mapping: Facilitating the generation of internal configurations that reference route rules by name within gateway and mesh implementations.

This follows the same well-established pattern already adopted for Gateway listeners, Service ports, Pods (and containers), and many other Kubernetes resources.

While the new name field is optional (so existing resources remain valid), its use is strongly encouraged. Implementations are not expected to assign a default value, but they may enforce constraints such as immutability.

Finally, keep in mind that the name format is validated, and other fields (such as sectionName) may impose additional, indirect constraints.

Experimental channel changes

Enabling external Auth for HTTPRoute

Giving Gateway API the ability to enforce authentication (and possibly authorization as well) at the Gateway or HTTPRoute level has been a highly requested feature for a long time. (See the GEP-1494 issue for some background.)

This Gateway API release adds an Experimental filter in HTTPRoute that tells the Gateway API implementation to call out to an external service to authenticate (and, optionally, authorize) requests.

This filter is based on the Envoy ext_authz API, and allows talking to an Auth service that uses either gRPC or HTTP for its protocol.

Both methods allow the configuration of what headers to forward to the Auth service, with the HTTP protocol allowing some extra information like a prefix path.

An HTTP example might look like this (noting that this example requires the Experimental channel to be installed and an implementation that supports External Auth to actually understand the config):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: require-auth
  namespace: default
spec:
  parentRefs:
  - name: your-gateway-here
  rules:
  - matches:
    - path:
        type: Prefix
        value: /admin
    filters:
    - type: ExternalAuth
      externalAuth:
        protocol: HTTP
        backendRef:
          name: auth-service
        http:
          # These headers are always sent for the HTTP protocol,
          # but are included here for illustrative purposes
          allowedHeaders:
          - Host
          - Method
          - Path
          - Content-Length
          - Authorization
    backendRefs:
    - name: admin-backend
      port: 8080

This allows the backend Auth service to use the supplied headers to make a determination about the authentication for the request.

When a request is allowed, the external Auth service will respond with a 200 HTTP response code, and optionally extra headers to be included in the request that is forwarded to the backend. When the request is denied, the Auth service will respond with a 403 HTTP response.

Since the Authorization header is used in many authentication methods, this method can be used to do Basic, OAuth, JWT, and other common authentication and authorization methods.
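
To make that contract concrete, here is a minimal Go sketch of an HTTP Auth service: 200 (optionally with extra headers to inject into the upstream request) allows, 403 denies. The port, header names, and token check are illustrative assumptions, not part of the Gateway API spec.

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// The gateway forwards the allowed headers (e.g. Authorization).
		if r.Header.Get("Authorization") == "Bearer demo-token" { // placeholder check
			// Extra response headers may be propagated to the backend request.
			w.Header().Set("X-Auth-User", "demo-user") // illustrative
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusForbidden) // deny the request
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // port is an assumption
}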

Mesh resource

Lead(s): Flynn

GEP-3949: Mesh-wide configuration and supported features

Gateway API v1.4.0 introduces a new experimental Mesh resource, which provides a way to configure mesh-wide settings and discover the features supported by a given mesh implementation. This resource is analogous to the Gateway resource and will initially be mainly used for conformance testing, with plans to extend its use to off-cluster Gateways in the future.

The Mesh resource is cluster-scoped and, as an experimental feature, is named XMesh and resides in the gateway.networking.x-k8s.io API group. A key field is controllerName, which specifies the mesh implementation responsible for the resource. The resource's status stanza indicates whether the mesh implementation has accepted it and lists the features the mesh supports.

One of the goals of this GEP is to avoid making it more difficult for users to adopt a mesh. To simplify adoption, mesh implementations are expected …

Last Week in Kubernetes Development - Week Ending November 02 2025

Week Ending November 02, 2025

https://lwkd.info/2025/20251106

Developer News

The 2025 Steering Committee Election results are announced. Congratulations to Kat Cosgrove, Paco Xu, Rita Zhang, and Maciej Szulik on being elected to their two-year terms on the steering committee. Maciej and Paco are returning steering committee members. Thank you to all the candidates and all the community members for voting. An election retro call is happening on 19th November, 8AM PT. If you have any feedback about the steering elections this year, please add it to the retro doc.

WG-LTS is winding down, after discussions concluded that a community-supported LTS within the Kubernetes project is probably not the right answer. The Compatibility Versions feature is cited as an alternative for safer upgrades.

The Kustomize project is seeking proposals for a new logo. If you have any ideas for a new logo, do post it in the open issue!

Release Schedule

Next Deadline: Code Freeze, 7th November

The code freeze and test freeze deadline is on Friday 7th November 2025, 12:00 UTC. Please open an early exception for your KEP if you think you need more time!

Kubernetes v1.35.0-alpha.3 is live.

Featured PRs

[KEP-4330] add min-compatibility-version to control plane

This PR is part of a larger effort to introduce “compatibility versions” to control plane components and features, eventually permitting upgrades and rollbacks that span more than one Kubernetes version safely. This PR adds the field to apiserver, controller-manager, and scheduler.

KEP of the Week

KEP-4827: Component Statusz

As part of Kubernetes' march toward structured data for everything, this KEP introduces a structured, standardized endpoint for health and status checking. It will enhance observability and enable building new monitoring and performance tools. Statusz is kicking off with v1alpha1 in 1.35.

Other Merges

New k8s-resource-fully-qualified-name format for Declarative Validation

Enhance several different E2E tests (plus many more, kudos Lukasz Szaszkiewicz) to support EnableWatchListClient

CRD Conditions include an ObservedGeneration to deter race conditions

DRA APIs: migrate several DRA validations (https://github.com/kubernetes/kubernetes/pull/134963), use EachKey to map resources, and make DeviceAttribute a Union type

New tests to support Deployments terminating pods during Recreate and RollingUpdate

Benchmarking Shared Informers now

Allow some kubeadm functions to be exported

Support Declarative Validation for StorageClass

Use informer.RunWithContext in controller tests

Test stepwise volume expansion

Prevent AllocationMode: All failure

Allow DRA to process inactive workloads with Allocatable=0

ContextualLogging migrations: cpumanager

JWKS fetch metrics for structured authentication

Pod Generation E2E tests promoted to conformance

Promotions

KUBECTL_COMMAND_HEADERS to GA

InPlacePodVerticalScaling to GA

StorageVersionMigration to beta

SystemWatchdog to GA

MutableCSINodeAllocatableCount to beta

DeploymentReplicaSetTerminatingReplicas to beta

Deprecated

BlockOwnerDeletion is removed from resource claims

Stop providing taint keys in Pod statuses when scheduling fails

DynamicResourceAllocation feature gate locked on; will be removed in a few releases

Remove kubelet --pod-infra-container-image switch

Subprojects and Dependency Updates

containerd v2.2.0-rc.0 (pre-release) introduces a mount manager, adds conf.d include support in the default configuration, and supports back references in the garbage collector. It improves CRI with ListPodSandboxMetrics and image-volume subpaths, adds parallel image unpack and referrers fetcher, updates the EROFS snapshotter, enables OpenTelemetry traces and WASM plugin support in NRI, improves shim reload performance, and postpones v2.2 deprecations to v2.3.

nerdctl v2.2.0-rc.0 fixes a namestore directory regression, adds mount-manager support, and introduces new checkpoint commands (create, ls, rm). It adds a --estargz-gzip-helper flag for image conversion and updates bundled dependencies, including containerd v2.2.0-rc.0, runc v1.3.2, BuildKit v0.25.1, and Stargz Snapshotter v0.18.0.

cloud-provider-vsphere v1.33.1 updates CAPI to v1.10.1 and CAPV to v1.13.0, enables weekly security checks, updates API calls to use FQDN, and fixes Service deletion when VirtualMachineService is not found. It also bumps Kubernetes to v1.33.5 and refreshes documentation.

cloud-provider-vsphere v1.32.3 provides dependency updates across test suites, upgrades controller-runtime to v0.19.6 and govmomi to v0.46.3, introduces weekly security checks, and fixes VirtualMachineService deletion. It also adopts FQDN for Supervisor API calls and includes CVE patches.

vsphere-cpi-chart-1.33.1 and vsphere-cpi-chart-1.32.3 update Helm charts for vSphere CPI to align with recent vSphere provider releases.

ingress-nginx helm-chart-4.14.0, 4.13.4, and 4.12.8 deliver updated Helm charts for the NGINX ingress controller with alignment to current controller and Kubernetes versions.

cluster-autoscaler v1.30.7 backports the OCI CloudProvider feature to the v1.30 line and publishes multi-architecture images (v1.30.7).

prometheus v3.7.3 fixes a UI redirect regression involving -web.external-url and -web.route-prefix, resolves federation issues for some native histograms, corrects promtool check config failures when --lint=none is specified, and eliminates a remote-write queue resharding deadlock.

Shoutouts

Sreeram Venkitesh - Shoutout to everyone who helped run the 2025 steering committee elections smoothly - The EOs and alternate EOs: @cblecker @Nina Polshakova @Arujjwal @Rey Lejano, K8s infra liaison @mahamed, and @jberkus for all the support from the very beginning and also for helping us with Elekto. Also big thanks to all the previous (and continuing) steering committee members for their support in making the election a smooth and successful one. Thank you all!

via Last Week in Kubernetes Development https://lwkd.info/

November 06, 2025 at 02:37AM

Graphs in your head, or how to assess a Kubernetes workload, with Oleksii Kolodiazhnyi

https://ku.bz/zDThxGQsP

Understanding what's actually happening inside a complex Kubernetes system is one of the biggest challenges architects face.

Oleksii Kolodiazhnyi, Senior Architect at Mirantis, shares his structured approach to Kubernetes workload assessment. He breaks down how to move from high-level business understanding to detailed technical analysis, using visualization tools and systematic documentation.

You will learn:

A top-down assessment methodology that starts with business cases and use cases before diving into technical details

Practical visualization techniques using tools like KubeView, K9s, and Helm dashboard to quickly understand resource interactions

Systematic resource discovery approaches for different scenarios, from well-documented Helm-based deployments to legacy applications with hard-coded configurations buried in containers (see the sketch after this list)

Documentation strategies for creating consumable artifacts that serve different audiences, from business stakeholders to new team members joining the project
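
As a small illustration of scripted resource discovery (an assumed client-go sketch, not Oleksii's actual tooling), the following lists the Deployments in one namespace and prints their labels, the raw edges from which you can start drawing the dependency graph in your head. The kubeconfig path and namespace are assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; read-only access is enough for an assessment.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns := "default" // assumption: assess one namespace at a time
	deps, err := clientset.AppsV1().Deployments(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range deps.Items {
		// Labels (and owner references) are the raw material for the graph.
		fmt.Printf("Deployment %s: labels=%v\n", d.Name, d.Labels)
	}
}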

Sponsor

This episode is sponsored by StormForge by CloudBolt — automatically rightsize your Kubernetes workloads with ML-powered optimization

More info

Find all the links and info for this episode here: https://ku.bz/zDThxGQsP

Interested in sponsoring an episode? Learn more.

via KubeFM https://kube.fm

November 04, 2025 at 05:00AM

DevOps & AI Toolkit - Best AI Models for DevOps & SRE: Real-World Agent Testing - https://www.youtube.com/watch?v=r84kQ5IMIQM

Best AI Models for DevOps & SRE: Real-World Agent Testing

A comprehensive, data-driven comparison of 10 leading large language models (LLMs) from Google, Anthropic, OpenAI, xAI, DeepSeek, and Mistral, specifically tested for DevOps, SRE, and platform engineering workflows. Instead of relying on traditional benchmarks or marketing claims, this evaluation runs real agent workflows through production scenarios: Kubernetes operations, cluster analysis, policy generation, manifest creation, and systematic troubleshooting—all with actual timeout constraints. The results reveal shocking gaps between benchmark promises and production reality: 70% of models couldn't complete tasks in reasonable timeframes, premium "reasoning" models failed on tasks cheaper alternatives handled easily, and the most expensive model ($120 per million output tokens) failed more tests than it passed.

The evaluation measures five key dimensions: overall performance quality, reliability and completion rates, consistency across different tasks, cost-performance value, and context window efficiency. Five distinct test scenarios push models through endurance tests (100+ consecutive interactions), rapid pattern recognition (5-minute workflows), comprehensive policy compliance analysis, extreme context pressure (100,000+ token loads), and systematic investigation loops requiring intelligent troubleshooting. The rankings reveal clear performance tiers, with Claude Haiku emerging as the overall winner for its exceptional efficiency and price-performance ratio, while Claude Sonnet takes the reliability crown with 98% completion rates. The video provides specific recommendations on which models to use, which to avoid, and why cost doesn't always correlate with capability in production environments.

#LLMComparison #DevOps #AIforEngineers

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/ai/best-ai-models-for-devops--sre-real-world-agent-testing 🔗 DevOps AI Toolkit: https://github.com/vfarcic/dot-ai 🎬 Analysis report: https://github.com/vfarcic/dot-ai/blob/main/eval/analysis/platform/synthesis-report.md

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Large Language Models (LLMs) Compared
01:54 How I Compare Large Language Models
05:01 LLM Evaluation Criteria and Test Scenarios
13:23 AI Model Benchmark Results
27:34 AI Model Rankings and Recommendations

via YouTube https://www.youtube.com/watch?v=r84kQ5IMIQM

Adult recess club brings wellness, play to Metro Detroit
The “Recess Club,” organized by wellness advocate Melissa Coulier, offers twice-monthly gatherings designed to help adults disconnect from technology and reconnect with both nature and each other.