
Ep19 - Ask Me Anything About Anything with Scott Rosenberg and Ramiro Berrelleza
There are no restrictions in this AMA session. You can ask anything about DevOps, Cloud, Kubernetes, Platform Engineering, containers, or anything else. We'll have special guests Scott Rosenberg and Ramiro Berrelleza to help us out.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: Codefresh 🔗 Codefresh GitOps Cloud: https://codefresh.io ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
via YouTube https://www.youtube.com/watch?v=EZ6wodGif0Q
Replacing StatefulSets with a custom Kubernetes operator in our Postgres cloud platform, with Andrew Charlton
Discover why standard Kubernetes StatefulSets might not be sufficient for your database workloads and how custom operators can provide better solutions for stateful applications.
Andrew Charlton, Staff Software Engineer at Timescale, explains how they replaced Kubernetes StatefulSets with a custom operator called Popper for their PostgreSQL Cloud Platform. He details the technical limitations they encountered with StatefulSets and how their custom approach provides more intelligent management of database clusters.
You will learn:
Why StatefulSets fall short for managing high-availability PostgreSQL clusters, particularly around pod ordering and volume management
How Timescale's instance matching approach solves complex reconciliation challenges when managing heterogeneous database workloads
The benefits of implementing discrete, idempotent actions rather than workflows in Kubernetes operators
Real-world examples of operations that became possible with their custom operator, including volume downsizing and availability zone consolidation
Sponsor
This episode is brought to you by mirrord — run local code like in your Kubernetes cluster without deploying first.
More info
Find all the links and info for this episode here: https://ku.bz/fhZ_pNXM3
Interested in sponsoring an episode? Learn more.
via KubeFM https://kube.fm
April 22, 2025 at 08:59AM
Kubernetes Multicontainer Pods: An Overview
https://kubernetes.io/blog/2025/04/22/multi-container-pods-overview/
As cloud-native architectures continue to evolve, Kubernetes has become the go-to platform for deploying complex, distributed systems. One of the most powerful yet nuanced design patterns in this ecosystem is the sidecar pattern—a technique that allows developers to extend application functionality without diving deep into source code.
The origins of the sidecar pattern
Think of a sidecar like a trusty companion motorcycle attachment. Historically, IT infrastructures have always used auxiliary services to handle critical tasks. Before containers, we relied on background processes and helper daemons to manage logging, monitoring, and networking. The microservices revolution transformed this approach, making sidecars a structured and intentional architectural choice. With the rise of microservices, the sidecar pattern became more clearly defined, allowing developers to offload specific responsibilities from the main service without altering its code. Service meshes like Istio and Linkerd have popularized sidecar proxies, demonstrating how these companion containers can elegantly handle observability, security, and traffic management in distributed systems.
Kubernetes implementation
In Kubernetes, sidecar containers operate within the same Pod as the main application, enabling communication and resource sharing. Does this sound just like defining multiple containers alongside each other inside the Pod? It does, and that is how sidecar containers had to be implemented before Kubernetes v1.29.0, which introduced native support for sidecars. Sidecar containers can now be defined within a Pod manifest using the spec.initContainers field; what makes an entry a sidecar container is that you specify it with restartPolicy: Always. You can see an example of this below, which is a partial snippet of the full Kubernetes manifest:
initContainers:
  - name: logshipper
    image: alpine:latest
    restartPolicy: Always
    command: ['sh', '-c', 'tail -F /opt/logs.txt']
    volumeMounts:
      - name: data
        mountPath: /opt
That field name, spec.initContainers, may sound confusing. Why, when you want to define a sidecar container, do you put an entry in the spec.initContainers array? Classic init containers run to completion just before the main application starts, so they are one-off, whereas sidecars often run in parallel to the main app container. It is the restartPolicy: Always on an entry in spec.initContainers that distinguishes a Kubernetes-native sidecar container from a classic init container and ensures it stays up.
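To make the distinction concrete, here is a minimal, hypothetical Pod manifest (the pod name, images, and commands are illustrative, not from the original post) that combines a classic init container with a native sidecar. Only the logshipper entry carries restartPolicy: Always, so it keeps running while prepare-logs exits once its setup is done:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    # Classic init container: runs to completion before the sidecar and the app start.
    - name: prepare-logs
      image: alpine:latest
      command: ['sh', '-c', 'touch /opt/logs.txt']
      volumeMounts:
        - name: data
          mountPath: /opt
    # Native sidecar: restartPolicy: Always keeps it running alongside the app.
    - name: logshipper
      image: alpine:latest
      restartPolicy: Always
      command: ['sh', '-c', 'tail -F /opt/logs.txt']
      volumeMounts:
        - name: data
          mountPath: /opt
  containers:
    - name: app
      image: alpine:latest
      command: ['sh', '-c', 'while true; do echo "$(date) hello" >> /opt/logs.txt; sleep 5; done']
      volumeMounts:
        - name: data
          mountPath: /opt
  volumes:
    - name: data
      emptyDir: {}

Native sidecars also start before the main containers and are terminated only after them, which is what makes them dependable for concerns like log shipping and proxying.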
When to embrace (or avoid) sidecars
While the sidecar pattern can be useful in many cases, it is generally not the preferred approach unless the use case justifies it. Adding a sidecar increases complexity, resource consumption, and potential network latency. Instead, simpler alternatives such as built-in libraries or shared infrastructure should be considered first.
Deploy a sidecar when:
You need to extend application functionality without touching the original code
Implementing cross-cutting concerns like logging, monitoring or security
Working with legacy applications requiring modern networking capabilities
Designing microservices that demand independent scaling and updates
Proceed with caution if:
Resource efficiency is your primary concern
Minimal network latency is critical
Simpler alternatives exist
You want to minimize troubleshooting complexity
Four essential multi-container patterns
Init container pattern
The Init container pattern is used to execute (often critical) setup tasks before the main application container starts. Unlike regular containers, init containers run to completion and then terminate, ensuring that preconditions for the main application are met.
Ideal for:
Preparing configurations
Loading secrets
Verifying dependency availability
Running database migrations
The init container ensures your application starts in a predictable, controlled environment without code modifications.
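As an illustrative sketch (the service name, images, and port are assumptions), an init container that blocks until a database answers before the main container starts could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  initContainers:
    # Runs to completion; the main container does not start until this succeeds.
    - name: wait-for-db
      image: busybox:1.36
      command: ['sh', '-c', 'until nc -z postgres 5432; do echo waiting for db; sleep 2; done']
  containers:
    - name: web
      image: nginx:1.27
      ports:
        - containerPort: 80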
Ambassador pattern
An ambassador container provides Pod-local helper services that expose a simple way to access a network service. Commonly, ambassador containers send network requests on behalf of an application container and take care of challenges such as service discovery, peer identity verification, or encryption in transit.
Perfect when you need to:
Offload client connectivity concerns
Implement language-agnostic networking features
Add security layers like TLS
Create robust circuit breakers and retry mechanisms
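A minimal ambassador sketch, assuming a hypothetical nginx-based proxy configured through a ConfigMap named ambassador-nginx-conf (all names and images are illustrative). The application only ever talks to localhost, and the ambassador forwards traffic to the real backend:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
    - name: app
      image: alpine:latest
      # The app only talks to localhost; the ambassador handles the real endpoint,
      # TLS, retries, and so on.
      command: ['sh', '-c', 'while true; do wget -qO- http://localhost:9000/ || true; sleep 10; done']
    - name: ambassador
      image: nginx:1.27
      ports:
        - containerPort: 9000
      volumeMounts:
        - name: ambassador-config
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: ambassador-config
      configMap:
        name: ambassador-nginx-conf  # hypothetical ConfigMap with a proxy_pass to the backend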
Configuration helper
A configuration helper sidecar provides configuration updates to an application dynamically, ensuring it always has access to the latest settings without disrupting the service. Often the helper also needs to supply an initial configuration before the application can start successfully.
Use cases:
Fetching environment variables and secrets
Polling configuration changes
Decoupling configuration management from application logic
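A rough configuration-helper sketch, assuming a hypothetical config-service endpoint and a shared emptyDir volume (names and intervals are illustrative). The helper runs as a native sidecar and periodically refreshes a file that the application re-reads:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-helper
spec:
  initContainers:
    # Native sidecar: periodically refreshes the config file on the shared volume.
    # config-service is a hypothetical endpoint.
    - name: config-helper
      image: alpine:latest
      restartPolicy: Always
      command: ['sh', '-c', 'while true; do wget -qO /config/app.conf http://config-service/app.conf || true; sleep 30; done']
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: alpine:latest
      # The application re-reads the shared config file instead of managing it itself.
      command: ['sh', '-c', 'while true; do cat /config/app.conf 2>/dev/null; sleep 30; done']
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      emptyDir: {}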
Adapter pattern
An adapter (or sometimes façade) container enables interoperability between the main application container and external services. It does this by translating data formats, protocols, or APIs.
Strengths:
Transforming legacy data formats
Bridging communication protocols
Facilitating integration between mismatched services
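An illustrative adapter sketch (formats, names, and images are assumptions): the legacy application writes pipe-delimited metrics to a shared volume, and the adapter rewrites them into the space-separated form a hypothetical monitoring agent expects:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter
spec:
  containers:
    - name: legacy-app
      image: alpine:latest
      # Writes metrics in a legacy, pipe-delimited format.
      command:
        - sh
        - -c
        - |
          while true; do
            echo "cpu|$(date +%s)|0.42" >> /metrics/raw.txt
            sleep 15
          done
      volumeMounts:
        - name: metrics
          mountPath: /metrics
    - name: format-adapter
      image: alpine:latest
      # Translates the legacy format into the space-separated form the consumer expects.
      command:
        - sh
        - -c
        - |
          while true; do
            tr '|' ' ' < /metrics/raw.txt > /metrics/converted.txt 2>/dev/null
            sleep 15
          done
      volumeMounts:
        - name: metrics
          mountPath: /metrics
  volumes:
    - name: metrics
      emptyDir: {}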
Wrap-up
While sidecar patterns offer tremendous flexibility, they're not a silver bullet. Each added sidecar introduces complexity, consumes resources, and potentially increases operational overhead. Always evaluate simpler alternatives first. The key is strategic implementation: use sidecars as precision tools to solve specific architectural challenges, not as a default approach. When used correctly, they can improve security, networking, and configuration management in containerized environments. Choose wisely, implement carefully, and let your sidecars elevate your container ecosystem.
via Kubernetes Blog https://kubernetes.io/
April 21, 2025 at 08:00PM
Mirrord Magic: Write Code Locally, See It Remotely!
Learn how to develop applications locally while integrating with remote production-like environments using mirrord. We'll demonstrate how to mirror and steal requests, connect to remote databases, and set up filtering to ensure a seamless development process without impacting others. Follow along as we configure and run mirrord, leveraging its capabilities to create an efficient and isolated development environment. This video will help you optimize your development workflow. Watch now to see mirrord in action!
Development #Kubernetes #mirrord
Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join
▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/development/mirrord-magic-write-code-locally,-see-it-remotely 🔗 mirrord: https://mirrord.dev 🔗 UpCloud: https://upcloud.com
▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below).
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/
▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Development Environments 02:17 Setting The Scene (Staging Environment) 06:24 Development Environments with mirrord 13:34 Stealing Requests with mirrord 15:15 Filtering Requests with mirrord 17:52 What Else? 19:32 mirrord Pros and Cons
via YouTube https://www.youtube.com/watch?v=NLa0K5mybzo
Week Ending April 13, 2025
https://lwkd.info/2025/20250416
Developer News
The next New Contributor Orientation will take place on Tuesday, April 22.
LFX Mentorship 2025 Term 2 is open for SIGs to submit projects for mentorship. To propose a mentoring project, PR it into project_ideas. If you have questions about LFX mentorship, ask in #sig-contribex.
All of the current SIG Scheduling leads are stepping down. They have nominated chairs Kensei Nakada and Maciej Skoczeń and TLs Kensei Nakada and Dominik Marciński to take their places.
Filip Křepinský, supported by SIG-Node, has proposed creating a Node Lifecycle Working Group.
Release Schedule
Next Deadline: Release day, 23 April
We are currently in Code Freeze.
Kubernetes v1.33.0-rc.1 is now available for testing.
Due to the Release Managers’ availability and a conflict with the v1.33.0-rc.1 release, the April 2025 patch releases have been postponed to the following week (Tuesday, April 22).
KEP of the Week
KEP 1769: Memory Manager
This KEP defined the Memory Manager, which has enabled guaranteed memory and hugepages allocation for pods in the Guaranteed QoS class. It supports both single-NUMA and multi-NUMA strategies, ensuring memory is allocated from the minimal number of NUMA nodes. The component provides NUMA affinity hints to the Topology Manager to support optimal pod placement. While single-NUMA is a special case of multi-NUMA, both strategies are described to highlight different use cases.
This KEP was implemented in Kubernetes 1.32.
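For orientation, the Memory Manager is configured through the kubelet; a rough, illustrative KubeletConfiguration sketch with the Static policy (the reservation values are placeholders, and reservedMemory must cover the reserved memory plus the hard eviction threshold) might look like this:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static
kubeReserved:
  memory: 500Mi
systemReserved:
  memory: 500Mi
evictionHard:
  memory.available: "100Mi"
# reservedMemory must equal kubeReserved + systemReserved + the hard eviction threshold.
reservedMemory:
  - numaNode: 0
    limits:
      memory: 1100Mi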
Shoutouts
Nina Polshakova: Shout out to the amazing v1.33 Docs team for a smooth Docs freeze this week and all your hard work this release! rayandas, Melony Q. (aka.cloudmelon ), Sreeram Venkitesh, Urvashi, Arvind Parekh, Michelle Nguyen, Shedrack Akintayo
via Last Week in Kubernetes Development https://lwkd.info/
April 16, 2025 at 06:00PM