
Suggested Reads
NFTables mode for kube-proxy
https://kubernetes.io/blog/2025/02/28/nftables-kube-proxy/
A new nftables mode for kube-proxy was introduced as an alpha feature in Kubernetes 1.29. Currently in beta, it is expected to be GA as of 1.33. The new mode fixes long-standing performance problems with the iptables mode and all users running on systems with reasonably-recent kernels are encouraged to try it out. (For compatibility reasons, even once nftables becomes GA, iptables will still be the default.)
Why nftables? Part 1: data plane latency
The iptables API was designed for implementing simple firewalls, and has problems scaling up to support Service proxying in a large Kubernetes cluster with tens of thousands of Services.
In general, the ruleset generated by kube-proxy in iptables mode has a number of iptables rules proportional to the sum of the number of Services and the total number of endpoints. In particular, at the top level of the ruleset, there is one rule to test each possible Service IP (and port) that a packet might be addressed to:
# If the packet is addressed to 172.30.0.41:80, then jump to the chain
# KUBE-SVC-XPGD46QRK7WJZT7O for further processing
-A KUBE-SERVICES -m comment --comment "namespace1/service1:p80 cluster IP" -m tcp -p tcp -d 172.30.0.41 --dport 80 -j KUBE-SVC-XPGD46QRK7WJZT7O

# If the packet is addressed to 172.30.0.42:443, then...
-A KUBE-SERVICES -m comment --comment "namespace2/service2:p443 cluster IP" -m tcp -p tcp -d 172.30.0.42 --dport 443 -j KUBE-SVC-GNZBNJ2PO5MGZ6GT

# etc...
-A KUBE-SERVICES -m comment --comment "namespace3/service3:p80 cluster IP" -m tcp -p tcp -d 172.30.0.43 --dport 80 -j KUBE-SVC-X27LE4BHSL4DOUIK
This means that when a packet comes in, the time it takes the kernel to check it against all of the Service rules is O(n) in the number of Services. As the number of Services increases, both the average and the worst-case latency for the first packet of a new connection increase (with the difference between best-case, average, and worst-case being mostly determined by whether a given Service IP address appears earlier or later in the KUBE-SERVICES chain).
By contrast, with nftables, the normal way to write a ruleset like this is to have a single rule, using a "verdict map" to do the dispatch:
table ip kube-proxy {
        # The service-ips verdict map indicates the action to take for each matching packet.
        map service-ips {
                type ipv4_addr . inet_proto . inet_service : verdict
                comment "ClusterIP, ExternalIP and LoadBalancer IP traffic"
                elements = { 172.30.0.41 . tcp . 80 : goto service-ULMVA6XW-namespace1/service1/tcp/p80,
                             172.30.0.42 . tcp . 443 : goto service-42NFTM6N-namespace2/service2/tcp/p443,
                             172.30.0.43 . tcp . 80 : goto service-4AT6LBPK-namespace3/service3/tcp/p80,
                             ... }
        }

        # Now we just need a single rule to process all packets matching an
        # element in the map. (This rule says, "construct a tuple from the
        # destination IP address, layer 4 protocol, and destination port; look
        # that tuple up in "service-ips"; and if there's a match, execute the
        # associated verdict.")
        chain services {
                ip daddr . meta l4proto . th dport vmap @service-ips
        }

        ...
}
Since there's only a single rule, with a roughly O(1) map lookup, packet processing time is more or less constant regardless of cluster size, and the best-, average-, and worst-case latencies are very similar.
But note the huge difference in the vertical scale between the iptables and nftables latency graphs in the linked post. In the clusters with 5,000 and 10,000 Services, the p50 (average) latency for nftables is about the same as the p01 (approximately best-case) latency for iptables. In the 30,000-Service cluster, the p99 (approximately worst-case) latency for nftables beats the p01 latency for iptables by a few microseconds. The post also plots both sets of data together, though you may have to squint to see the nftables results.
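To see what this looks like on a real node running kube-proxy in nftables mode, the generated ruleset can be inspected with the standard nft tooling (the table and map names below are taken from the excerpt above):

nft list table ip kube-proxy             # the full IPv4 ruleset kube-proxy manages
nft list map ip kube-proxy service-ips   # just the Service dispatch map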
Why nftables? Part 2: control plane latency
While the improvements to data plane latency in large clusters are great, there's another problem with iptables kube-proxy that often keeps users from even being able to grow their clusters to that size: the time it takes kube-proxy to program new iptables rules when Services and their endpoints change.
With both iptables and nftables, the total size of the ruleset as a whole (actual rules, plus associated data) is O(n) in the combined number of Services and their endpoints. Originally, the iptables backend would rewrite every rule on every update, and with tens of thousands of Services, this could grow to be hundreds of thousands of iptables rules. Starting in Kubernetes 1.26, we began improving kube-proxy so that it could skip updating most of the unchanged rules in each update, but the limitations of iptables-restore as an API meant that it was still always necessary to send an update that's O(n) in the number of Services (though with a noticeably smaller constant than it used to be). Even with those optimizations, it can still be necessary to make use of kube-proxy's minSyncPeriod config option to ensure that it doesn't spend every waking second trying to push iptables updates.
The nftables APIs allow for doing much more incremental updates, and when kube-proxy in nftables mode does an update, the size of the update is only O(n) in the number of Services and endpoints that have changed since the last sync, regardless of the total number of Services and endpoints. The fact that the nftables API allows each nftables-using component to have its own private table also means that there is no global lock contention between components like with iptables. As a result, kube-proxy's nftables updates can be done much more efficiently than with iptables.
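As a rough illustration of what such an incremental update looks like at the nftables level (a sketch only: the chain name, Service IP, and endpoint IP are hypothetical, and kube-proxy's real service chains contain more than a single rule), adding one new Service amounts to a small transaction that creates one chain and one map element while leaving everything else untouched:

nft -f - <<'EOF'
# one new Service: its chain, a placeholder DNAT rule, and its dispatch entry
add chain ip kube-proxy service-HYPOTHET-ns4-svc4-tcp-p80
add rule ip kube-proxy service-HYPOTHET-ns4-svc4-tcp-p80 dnat to 10.244.1.10:80
add element ip kube-proxy service-ips { 172.30.0.44 . tcp . 80 : goto service-HYPOTHET-ns4-svc4-tcp-p80 }
EOF

The existing entries in service-ips and all of the other chains are not rewritten, which is what keeps the update cost proportional to the change rather than to the total size of the cluster.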
(Unfortunately I don't have cool graphs for this part.)
Why not nftables?
All that said, there are a few reasons why you might not want to jump right into using the nftables backend for now.
First, the code is still fairly new. While it has plenty of unit tests, performs correctly in our CI system, and has now been used in the real world by multiple users, it has not seen anything close to as much real-world usage as the iptables backend has, so we can't promise that it is as stable and bug-free.
Second, the nftables mode will not work on older Linux distributions; currently it requires a 5.13 or newer kernel. Additionally, because of bugs in early versions of the nft command line tool, you should not run kube-proxy in nftables mode on nodes that have an old (earlier than 1.0.0) version of nft in the host filesystem (or else kube-proxy's use of nftables may interfere with other uses of nftables on the system).
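A quick way to check both prerequisites on a node (a minimal sketch using the thresholds mentioned above):

uname -r        # needs a 5.13 or newer kernel
nft --version   # the host's nft should be 1.0.0 or newer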
Third, you may have other networking components in your cluster, such as the pod network or NetworkPolicy implementation, that do not yet support kube-proxy in nftables mode. You should consult the documentation (or forums, bug tracker, etc.) for any such components to see if they have problems with nftables mode. (In many cases they will not; as long as they don't try to directly interact with or override kube-proxy's iptables rules, they shouldn't care whether kube-proxy is using iptables or nftables.) Additionally, observability and monitoring tools that have not been updated may report less data for kube-proxy in nftables mode than they do for kube-proxy in iptables mode.
Finally, kube-proxy in nftables mode is intentionally not 100% compatible with kube-proxy in iptables mode. There are a few old kube-proxy features whose default behaviors are less secure, less performant, or less intuitive than we'd like, but where we felt that changing the default would be a compatibility break. Since the nftables mode is opt-in, this gave us a chance to fix those bad defaults without breaking users who weren't expecting changes. (In particular, with nftables mode, NodePort Services are now only reachable on their nodes' default IPs, as opposed to being reachable on all IPs, including 127.0.0.1, with iptables mode.) The kube-proxy documentation has more information about this, including information about metrics you can look at to determine if you are relying on any of the changed functionality, and what configuration options are available to get more backward-compatible behavior.
Trying out nftables mode
Ready to try it out? In Kubernetes 1.31 and later, you just need to pass --proxy-mode nftables to kube-proxy (or set mode: nftables in your kube-proxy config file).
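For reference, the config-file form is a KubeProxyConfiguration with just that one field set (minimal sketch):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"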
If you are using kubeadm to set up your cluster, the kubeadm documentation explains how to pass a KubeProxyConfiguration to kubeadm init. You can also deploy nftables-based clusters with kind.
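With kind, for example, the proxy mode can be selected in the cluster config, assuming a kind release recent enough to accept the nftables value (a sketch):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  kubeProxyMode: "nftables"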
You can also convert existing clusters from iptables (or ipvs) mode to nftables by updating the kube-proxy configuration and restarting the kube-proxy pods. (You do not need to reboot the nodes: when restarting in nftables mode, kube-proxy will delete any existing iptables or ipvs rules, and likewise, if you later revert back to iptables or ipvs mode, it will delete any existing nftables rules.)
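In a kubeadm-managed cluster, for example, that conversion looks roughly like this (a sketch; the ConfigMap name and layout differ in other distributions):

kubectl -n kube-system edit configmap kube-proxy              # set mode: "nftables" under the config.conf key
kubectl -n kube-system rollout restart daemonset kube-proxy   # restart kube-proxy to pick up the change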
Future plans
As mentioned above, while nftables is now the best kube-proxy mode, it is not the default, and we do not yet have a plan for changing that. We will continue to support the iptables mode for a long time.
The future of the IPVS mode of kube-proxy is less certain: its main advantage over iptables was that it was faster, but certain aspects of the IPVS architecture and APIs were awkward for kube-proxy's purposes (for example, the fact that the kube-ipvs0 device needs to have every Service IP address assigned to it), and some parts of Kubernetes Service proxying semantics were difficult to implement using IPVS (particularly the fact that some Services had to have different endpoints depending on whether you connected to them from a local or remote client). And now, the nftables mode offers the same performance as IPVS mode (actually, slightly better) without any of those downsides. (In theory, the IPVS mode retains the advantage of being able to use various other IPVS functionality.)
Week Ending February 23, 2025
https://lwkd.info/2025/20250227
Developer News
Unconference proposals are open for the Maintainer Summit EU. Also, remember to register.
The SIG Meet & Greet at KubeCon EU will be on April 3, 12:15pm to 2:15pm BST, at the Project Pavilion. Sign up if your SIG will have representation.
Maciek Pytel is stepping down from SIG-Autoscaling chair, and has proposed Kuba Tużnik to replace him.
Release Schedule
Next Deadline: Placeholder PRs for Docs, Feb 27
Yes, this means you should be starting your documentation process for those opt-in features. Also, final call for Enhancement Exceptions is March 3.
KEP of the Week
KEP 4633: Only allow anonymous auth for configured endpoints
This KEP proposes allowing anonymous authentication only for specified endpoints while disabling it elsewhere. Kubernetes permits anonymous requests by default, but fully disabling them (--anonymous-auth=false) can break unauthenticated health checks (healthz, livez, readyz). Misconfigurations, like binding system:anonymous to powerful roles, pose security risks. This proposal enhances security by minimizing misconfigurations while preserving essential functionality.
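For illustration, the kind of configuration this KEP enables looks roughly like the following structured authentication config (a sketch; the exact API version, field names, and required feature gate depend on the Kubernetes release):

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:        # anonymous requests are allowed only for these paths
  - path: /healthz
  - path: /livez
  - path: /readyz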
Other Merges
Watch added to controller roles that include List but do not include Watch
Move GetCurrentResourceVersion to storage.Interface
Rename CacheProxy to CacheDelegator
Fix to allow ImageVolume for Restricted PSA profiles
E2E tests for Pod exec to use websockets instead of SPDY
Cleanup for failing tests
Fix for in-place Pod resize E2E tests after forbidding memory limit decrease
Remove Flagz feature-gate check before populating serverRunOptions.Flagz
Framework util function GetPodList to return errors for upstream handling
Test apiserver updated to use default API groups, keeping tests as realistic as possible
Fix SelfSubjectReview test to decouple beta and GA types
DRA added dedicated integration tests
backoffQ in scheduler split into backoffQ and errorBackoffQ
Sweep and fix of stat, lstat, and evalsymlink usage for go1.23 on Windows
Metadata management for Pods updated to populate .metadata.generation on writes
CPU footprint of node cpumanager CFS quota test cases reduced to avoid false-negative reds on CI
Controllers that write out IP address or CIDR values to API objects updated to ensure that they always write values in canonical form
Fix for the ResourceQuota admission plugin not respecting any scope changes during updates
reflect.DeepEqual replaced with cmp.Diff in pkg/scheduler tests
Queueing hint added for VolumeAttachment deletion
Fixed an issue in register-gen where imports were missing
Canonicalization of NetworkDeviceData IPs now required
Promotions
AnyVolumeDataSource to GA
Version Updates
etcd image bumped to the latest v3.6.0-rc.1
Subprojects and Dependency Updates
Python client v32.0.1: server side apply, decimal to quantity conversion, cluster info
via Last Week in Kubernetes Development https://lwkd.info/
February 27, 2025 at 11:40AM
Ep12 - Ask Me Anything About DevOps, Cloud, Kubernetes, Platform Engineering,... w/Scott Rosenberg
There are no restrictions in this AMA session. You can ask anything about DevOps, Cloud, Kubernetes, Platform Engineering, containers, or anything else. We'll have a special guest Scott Rosenberg to help us out.
via YouTube https://www.youtube.com/watch?v=_3GAGoiLaoM
Learned it the hard way: don't use Cilium's default Pod CIDR, with Isala Piyarisi
This episode examines how a default configuration in Cilium CNI led to silent packet drops in production after 8 months of stable operations.
Isala Piyarisi, Senior Software Engineer at WSO2, shares how his team discovered that Cilium's default Pod CIDR (10.0.0.0/8) was conflicting with their Azure Firewall subnet assignments, causing traffic disruptions in their staging environment.
You will learn:
How Cilium's default CIDR allocation can create routing conflicts with existing infrastructure
A methodical process for debugging network issues using packet tracing, routing table analysis, and firewall logs
The procedure for safely changing Pod CIDR ranges in production clusters
Sponsor
This episode is sponsored by Learnk8s — get started on your Kubernetes journey through comprehensive online, in-person or remote training.
More info
Find all the links and info for this episode here: https://ku.bz/kJjXQlmTw
Interested in sponsoring an episode? Learn more.
via KubeFM https://kube.fm
February 25, 2025 at 07:00AM
Past, Present, and Future of Internal Developer Platforms (IDP)
Join me as we delve into the history and evolution of Internal Developer Platforms, from early scripts and Cron Jobs to the latest advancements with Kubernetes, Configuration Management tools, and Infrastructure-as-Code. We'll also glimpse into the future potential of AI in platform engineering. Let's dive in!
#DevOps #InternalDeveloperPlatform #PlatformEngineering
Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/internal-developer-platforms/past-present-and-future-of-internal-developer-platforms/_index.md
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 The History Of Internal Developer Platforms 04:29 What Is Internal Developer Platform? 06:10 Internal Developer Platforms of the Past 15:44 Internal Developer Platforms Today 20:47 The Future of Internal Developer Platforms
via YouTube https://www.youtube.com/watch?v=WAm3ypS0_wg
Notes on Civilization 7
https://chrisshort.net/micro/notes-on-civilization-7/
Notes on Civilization 7
via Chris Short https://chrisshort.net/
February 17, 2025
Specialized Templating - Feat. Porter, Werf, Radius, Score, PipeCD (You Choose!, Ch. 05, Ep. 05)
Specialized Templating - Choose Your Own Adventure: The Dignified Pursuit of a Developer Platform
In this episode, we'll go through tools typically used as a way to provide values that are processed by templates which, in turn, convert them into resources in the format a portal expects to have. The tools we'll explore and compare are Porter, Werf, Radius, Score, and PipeCD.
Vote for your choice of tool at https://cloud-native.slack.com/archives/C05M2NFNVRN. If you have not already joined CNCF Slack, you can do so from https://slack.cncf.io.
This and all other episodes are available at https://www.youtube.com/playlist?list=PLyicRj904Z9-FzCPvGpVHgRQVYJpVmx3Z.
More information about the "Choose Your Own Adventure" project including the source code and links to all the videos can be found at https://github.com/vfarcic/cncf-demo.
٩( ᐛ )و Whitney's YouTube Channel → https://www.youtube.com/@wiggitywhitney
#Porter #Werf #Radius #Score #PipeCD
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ 🔗 CNCF Slack invite (if you’re not already there): https://communityinviter.com/apps/cloud-native/cncf 🔗 Link to #you-choose channel in CNCF Slack: https://bit.ly/3NV7nHW 🔗 Specialized Templates: https://github.com/vfarcic/cncf-demo/blob/main/manuscript/specialized-templates/README.md
via YouTube https://www.youtube.com/watch?v=TEZVeWsirsw
Week Ending February 16, 2025
https://lwkd.info/2025/20250220
Developer News
Lucy Sweet and Tim Hockin would like to hear your answers to some (not so serious) questions about Kubernetes. Submit your answers here!
CNCF’s Mentoring team is looking for Google Summer of Code mentorship tasks for GSOC 2025. If your SIG has mentors and wants to participate, please submit a PR to the 2025 plan.
Release Schedule
Next Deadline: Placeholder PRs for Docs, February 27
Enhancements freeze was last week and we have a total of 76 KEPs tracked for v1.33 after the freeze! Out of these, 30 are KEPs in alpha, 22 graduating to beta, 22 graduating to GA and 2 are deprecation/removal KEPs.
The next deadline is the Docs placeholder PRs deadline, which is on February 27th. If you have your KEP(s) tracked for the release, follow the steps here to open a placeholder PR against the dev-1.33 branch in the k/website repo soon.
KEP of the Week
KEP 3257: Cluster Trust Bundles
This KEP introduces ClusterTrustBundle, a cluster-scoped resource for certificate signers to share trust anchors with workloads, along with a clusterTrustBundle kubelet projected volume source for filesystem-based access. A default ClusterTrustBundle with the kubernetes.io/kube-apiserver-serving signer is also introduced, potentially replacing the current kube-root-ca.crt ConfigMaps.
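For example, a workload could mount a bundle through the new projected volume source roughly like this (a sketch; trust-anchors and example-trust-bundle are hypothetical names, and field details may vary by release):

volumes:
- name: trust-anchors
  projected:
    sources:
    - clusterTrustBundle:
        name: example-trust-bundle   # hypothetical ClusterTrustBundle object
        path: ca-bundle.pem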
Other Merges
kube-proxy adds new metric to track entries deleted in conntrack reconciliation
kube-proxy adds new metric to track conntrack reconciliation latency
Rewrites to network-related e2e tests to use Deployments instead of ReplicationControllers
E2E tests added for HonorPVReclaimPolicy
apiserver /flagz endpoint fixed to respond with actual parsed flags
golangci-lint removed “strict” checking
Promotions
NFTablesProxyMode to GA
Shoutouts
aojea: Shoutout to Elizabeth Martin Campos for relentless digging through the legacy e2e code and fixing an incorrect assumption that was buried there
Dipesh Rawat, the v1.33 Enhancements Lead, gives big shoutouts to the v1.33 Enhancement shadows: @Arka, @eunji, @Faeka Ansari, @Jenny Shu and @lzung (extra kudos to the first-time shadows on the team) for all their hard work tracking 90+ KEPs for the enhancement freeze!
via Last Week in Kubernetes Development https://lwkd.info/
February 20, 2025 at 05:50AM
Simplifying Kubernetes deployments with a unified Helm chart, with Calin Florescu
Managing microservices in Kubernetes at scale often leads to inconsistent deployments and maintenance overhead. This episode explores a practical solution that standardizes service deployments while maintaining team autonomy.
Calin Florescu discusses how a unified Helm chart approach can help platform teams support multiple development teams efficiently while maintaining consistent standards across services.
You will learn:
Why inconsistent Helm chart configurations across teams create maintenance challenges and slow down deployments
How to implement a unified Helm chart that balances standardization with flexibility through override functions
How to maintain quality through automated documentation and testing with tools like Helm Docs and Helm unittest
Sponsor
This episode is sponsored by Learnk8s — get started on your Kubernetes journey through comprehensive online, in-person or remote training.
More info
Find all the links and info for this episode here: https://ku.bz/mcPtH5395
Interested in sponsoring an episode? Learn more.
via KubeFM https://kube.fm
February 18, 2025 at 05:00AM