Kubernetes

#eks #aws
How to rapidly scale your application with ALB on EKS (without losing traffic) | Amazon Web Services
To meet user demand, dynamic HTTP-based applications require constant scaling of Kubernetes pods. For applications exposed through Kubernetes ingress objects, the AWS Application Load Balancer (ALB) distributes incoming traffic automatically across the newly scaled replicas. When Kubernetes applications scale down due to a decline in demand, certain situations will result in brief interruptions for end […]
·aws.amazon.com·
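A minimal sketch of one common pattern for scaling in without dropped requests behind an ALB: a preStop sleep keeps the pod serving while the AWS Load Balancer Controller deregisters it. The app name, image, sleep duration, and grace period are illustrative assumptions, not values taken from the post.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                              # hypothetical app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 60  # must be longer than the preStop sleep
      containers:
        - name: web
          image: nginx                   # placeholder image
          ports:
            - containerPort: 80
          lifecycle:
            preStop:
              exec:
                # Keep serving while the ALB finishes deregistering this pod.
                command: ["sleep", "30"]
```

Pod readiness gates and the target group's deregistration delay are the complementary load-balancer-side settings; see the article for how they fit together.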
Exploring the effect of Topology Aware Hints on network traffic in Amazon Elastic Kubernetes Service | Amazon Web Services
Topology Aware Hints (TAH) is a feature available in Amazon EKS version 1.24. It’s intended to provide a mechanism that attempts to keep traffic closer to its origin, whether within the same AZ or in another location. In this post, we’ll explore how this feature can be used with Amazon EKS, its effects on how traffic is routed between pods within an Amazon EKS cluster when using multiple AZs, and whether this functionality allows Amazon EKS customers to optimize latency and inter-AZ data transfer costs in this architecture.
·aws.amazon.com·
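As a sketch of how the feature is enabled per Service on a cluster at that version, the hints are requested with an annotation (the service name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend                     # hypothetical service
  annotations:
    # Ask the EndpointSlice controller to populate zone hints for this Service.
    service.kubernetes.io/topology-aware-hints: "auto"
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```

kube-proxy only honors the hints when endpoints can be spread proportionally across zones; the post explores the resulting routing behavior and inter-AZ cost impact.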
Scale from 100 to 10,000 pods on Amazon EKS | Amazon Web Services
This post was co-authored by Nikhil Sharma and Ravishen Jain of OLX Autos. Introduction: At OLX Autos, we run more than 100 non-production (non-prod) environments in parallel for different use cases on ORION, our home-grown Internal Developer Platform (IDP). ORION runs on Amazon Elastic Kubernetes Service (Amazon EKS). Each of the Autos environments consists of at […]
·aws.amazon.com·
Building for Cost optimization and Resilience for EKS with Spot Instances | Amazon Web Services
This post is contributed by Chris Foote, Sr. EC2 Spot Specialist Solutions Architect. Running your Kubernetes and containerized workloads on Amazon EC2 Spot Instances is a great way to save costs. Kubernetes is a popular open-source container management system that allows you to deploy and manage containerized applications at scale. AWS makes it easy to run […]
·aws.amazon.com·
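For illustration, an eksctl-style sketch of a Spot-backed managed node group with multiple instance types to diversify across Spot pools; the cluster name, region, sizes, and instance types are assumptions:

```yaml
# eksctl ClusterConfig fragment (illustrative values)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
managedNodeGroups:
  - name: spot-workers
    spot: true                                            # request Spot capacity
    instanceTypes: ["m5.large", "m5a.large", "m4.large"]  # diversify across pools
    minSize: 2
    maxSize: 10
    desiredCapacity: 2
```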
Managed node groups - Amazon EKS
This is official Amazon Web Services (AWS) documentation for Amazon Elastic Kubernetes Service (Amazon EKS). Amazon EKS is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes clusters. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
·docs.aws.amazon.com·
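As an assumed example of the same construct for on-demand capacity, a minimal eksctl managed node group with its scaling configuration (all names and sizes are placeholders):

```yaml
# eksctl ClusterConfig fragment for an on-demand managed node group (illustrative)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
managedNodeGroups:
  - name: general-purpose
    instanceType: m5.large
    minSize: 1
    maxSize: 5
    desiredCapacity: 3
    labels:
      workload: general        # example node label for scheduling
```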
Managing Pod Scheduling Constraints and Groupless Node Upgrades with Karpenter in Amazon EKS | Amazon Web Services
Overview: Karpenter is a high-performance Kubernetes cluster autoscaler that can help you autoscale your groupless nodes by letting you schedule layered constraints using the Provisioner API. Karpenter also makes node upgrades easy through the node expiry TTL value, ttlSecondsUntilExpired. This blog post will walk you through all of the steps to make this possible, and […]
·aws.amazon.com·
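A minimal sketch of the two pieces named above, using the Karpenter v1alpha5 Provisioner API from that era; zones, architecture, and TTL values are illustrative assumptions:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Layered scheduling constraints: pods may narrow these further but cannot escape them.
  requirements:
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-east-1a", "us-east-1b"]
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
  # Groupless upgrades: expire nodes so Karpenter replaces them with fresh ones.
  ttlSecondsUntilExpired: 604800     # 7 days
  ttlSecondsAfterEmpty: 30
  providerRef:
    name: default                    # AWSNodeTemplate with subnet/AMI settings (not shown)
```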
Topology-aware traffic routing with topology keys
FEATURE STATE: Kubernetes v1.21 [deprecated] Note: This feature, specifically the alpha topologyKeys API, is deprecated since Kubernetes v1.21. Topology Aware Hints, introduced in Kubernetes v1.21, provide similar functionality. Service Topology enables a service to route traffic based upon the Node topology of the cluster. For example, a service can specify that traffic be preferentially routed to endpoints that are on the same Node as the client, or in the same availability zone.
·kubernetes.io·
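A sketch of the deprecated alpha field the page describes: a Service whose topologyKeys prefer same-node endpoints, then same-zone, then any (the service name and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cache                        # hypothetical service
spec:
  selector:
    app: cache
  ports:
    - port: 6379
  # Deprecated since v1.21: prefer endpoints on the client's node,
  # then its zone, then fall back to any endpoint.
  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "*"
```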
Topology-aware traffic routing with topology keys
Topology Aware Hints
Topology Aware Hints
FEATURE STATE: Kubernetes v1.23 [beta] Topology Aware Hints enable topology aware routing by including suggestions for how clients should consume endpoints. This approach adds metadata to enable consumers of EndpointSlice and / or Endpoints objects, so that traffic to those network endpoints can be routed closer to where it originated. For example, you can route traffic within a locality to reduce costs, or to improve network performance. Motivation Kubernetes clusters are increasingly deployed in multi-zone environments.
·kubernetes.io·
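For illustration, roughly what the populated hints look like on an EndpointSlice once the feature is active; the slice name, addresses, and zone names are invented:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: backend-abc12                # hypothetical slice for a Service named backend
  labels:
    kubernetes.io/service-name: backend
addressType: IPv4
ports:
  - port: 8080
endpoints:
  - addresses: ["10.0.1.15"]
    zone: us-east-1a
    hints:
      # Consumers such as kube-proxy route traffic originating in this zone here.
      forZones:
        - name: us-east-1a
```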
Using Amazon EC2 Spot Instances with Karpenter | Amazon Web Services
Overview: Karpenter is a dynamic, high-performance cluster auto scaling solution for the Kubernetes platform introduced at re:Invent 2021. Customers choose an auto scaling solution for a number of reasons, including improving the high availability and reliability of their workloads while at the same time reducing costs. With the introduction of Amazon EC2 Spot Instances, customers can […]
·aws.amazon.com·
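A hedged sketch of requesting Spot capacity through a v1alpha5 Provisioner requirement on karpenter.sh/capacity-type; the instance types and other values are assumptions:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: spot
spec:
  requirements:
    # Allow Spot capacity (add "on-demand" to the list to permit fallback).
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
    # Diversify across instance types to improve Spot availability.
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["m5.large", "m5.xlarge", "c5.large", "c5.xlarge"]
  ttlSecondsAfterEmpty: 30
  providerRef:
    name: default                    # AWSNodeTemplate with subnet/AMI settings (not shown)
```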
Using NodeLocal DNSCache in Kubernetes clusters
FEATURE STATE: Kubernetes v1.18 [stable] This page provides an overview of NodeLocal DNSCache feature in Kubernetes. Before you begin You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
·kubernetes.io·
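As a small experiment-style sketch, a pod can be pointed directly at the node-local cache address (169.254.20.10 is the link-local address conventionally used in the upstream NodeLocal DNSCache manifests; the pod itself is made up). With the default iptables-mode setup no per-pod change is needed, because the cache intercepts the cluster DNS service IP.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test                     # hypothetical test pod
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 169.254.20.10                # node-local DNS cache listen address
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sleep", "3600"]
```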
re:Invent 2021: AWS Containers track | Amazon Web Services
In 2021, re:Invent offers an in-person and virtual conference experience for our attendees. The in-person part of the event will be held in Las Vegas from November 29, 2021 – December 3, 2021. Attendees for the virtual event can register for free and will have access to a subset of the sessions over the virtual […]
·aws.amazon.com·
Amazon VPC CNI plugin increases pods per node limits | Amazon Web Services
As of August 2021, Amazon VPC Container Networking Interface (CNI) Plugin supports “prefix assignment mode”, enabling you to run more pods per node on AWS Nitro based EC2 instance types. To achieve higher pod density, the VPC CNI plugin leverages a new VPC capability that enables IP address prefixes to be associated with elastic network […]
·aws.amazon.com·
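As an illustrative fragment rather than a complete manifest, prefix assignment mode is controlled through environment variables on the aws-node DaemonSet that ships the VPC CNI plugin; the values shown are assumptions:

```yaml
# Fragment of the aws-node DaemonSet pod template (kube-system namespace), not a complete manifest
containers:
  - name: aws-node
    env:
      - name: ENABLE_PREFIX_DELEGATION   # hand out /28 IPv4 prefixes instead of individual IPs
        value: "true"
      - name: WARM_PREFIX_TARGET         # keep one spare prefix attached per node
        value: "1"
```

The node's max-pods value also has to be raised before the kubelet will actually schedule the additional pods; the post describes the accompanying changes.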