1_r/devopsish

Project Bluefin Tour on Framework Laptop | LinkedIn
Project Bluefin Tour on Framework Laptop | LinkedIn
Project Bluefin is a new Linux distribution designed for reliability, performance, and sustainability. Bluefin is built with the Cloud Native Desktop model. Jorge Castro, the creator of Universal Blue, joins Chris Short as Chris sets up a Framework Laptop to contribute to the project.
·linkedin.com·
Project Bluefin Tour on Framework Laptop | LinkedIn
F.C.C. Votes to Restore Net Neutrality Rules (Gift Article)
F.C.C. Votes to Restore Net Neutrality Rules (Gift Article)
Commissioners voted along party lines to revive the rules that declare broadband as a utility-like service that could be regulated like phones and water.
·nytimes.com·
F.C.C. Votes to Restore Net Neutrality Rules (Gift Article)
Kubernetes 1.30: Multi-Webhook and Modular Authorization Made Much Easier
Kubernetes 1.30: Multi-Webhook and Modular Authorization Made Much Easier

Kubernetes 1.30: Multi-Webhook and Modular Authorization Made Much Easier

https://kubernetes.io/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/

With Kubernetes 1.30, we (SIG Auth) are moving Structured Authorization Configuration to beta.

Today's article is about authorization: deciding what someone can and cannot access. Check out yesterday's article to find out what's new in Kubernetes v1.30 around authentication (finding out who's performing a task, and checking that they are who they say they are).

Introduction

Kubernetes continues to evolve to meet the intricate requirements of system administrators and developers alike. A critical aspect of Kubernetes that ensures the security and integrity of the cluster is the API server authorization. Until recently, the configuration of the authorization chain in kube-apiserver was somewhat rigid, limited to a set of command-line flags and allowing only a single webhook in the authorization chain. This approach, while functional, restricted the flexibility needed by cluster administrators to define complex, fine-grained authorization policies. The latest Structured Authorization Configuration feature (KEP-3221) aims to revolutionize this aspect by introducing a more structured and versatile way to configure the authorization chain, focusing on enabling multiple webhooks and providing explicit control mechanisms.

The Need for Improvement

Cluster administrators have long sought the ability to specify multiple authorization webhooks within the API Server handler chain and have control over detailed behavior like timeout and failure policy for each webhook. This need arises from the desire to create layered security policies, where requests can be validated against multiple criteria or sets of rules in a specific order. The previous limitations also made it difficult to dynamically configure the authorizer chain, leaving no room to manage complex authorization scenarios efficiently.

The Structured Authorization Configuration feature addresses these limitations by introducing a configuration file format to configure the Kubernetes API Server Authorization chain. This format allows specifying multiple webhooks in the authorization chain (all other authorization types are specified no more than once). Each webhook authorizer has well-defined parameters, including timeout settings, failure policies, and conditions for invocation with CEL rules to pre-filter requests before they are dispatched to webhooks, helping you prevent unnecessary invocations. The configuration also supports automatic reloading, ensuring changes can be applied dynamically without restarting the kube-apiserver. This feature addresses current limitations and opens up new possibilities for securing and managing Kubernetes clusters more effectively.
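The file is handed to the API server at startup via the --authorization-config flag; the path below is illustrative. Note that, per the KEP, this flag cannot be combined with the legacy --authorization-mode and --authorization-webhook-* flags.

kube-apiserver --authorization-config=/etc/kubernetes/authorization-config.yaml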

Sample Configurations

Here is a sample structured authorization configuration along with descriptions for all fields, their defaults, and possible values.

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    # Name used to describe the authorizer
    # This is explicitly used in monitoring machinery for metrics
    # Note:
    #   - Validation for this field is similar to how K8s labels are validated today.
    # Required, with no default
    name: webhook
    webhook:
      # The duration to cache 'authorized' responses from the webhook
      # authorizer.
      # Same as setting --authorization-webhook-cache-authorized-ttl flag
      # Default: 5m0s
      authorizedTTL: 30s
      # The duration to cache 'unauthorized' responses from the webhook
      # authorizer.
      # Same as setting --authorization-webhook-cache-unauthorized-ttl flag
      # Default: 30s
      unauthorizedTTL: 30s
      # Timeout for the webhook request
      # Maximum allowed is 30s.
      # Required, with no default.
      timeout: 3s
      # The API version of the authorization.k8s.io SubjectAccessReview to
      # send to and expect from the webhook.
      # Same as setting --authorization-webhook-version flag
      # Required, with no default
      # Valid values: v1beta1, v1
      subjectAccessReviewVersion: v1
      # MatchConditionSubjectAccessReviewVersion specifies the SubjectAccessReview
      # version the CEL expressions are evaluated against
      # Valid values: v1
      # Required, no default value
      matchConditionSubjectAccessReviewVersion: v1
      # Controls the authorization decision when a webhook request fails to
      # complete or returns a malformed response or errors evaluating
      # matchConditions.
      # Valid values:
      #   - NoOpinion: continue to subsequent authorizers to see if one of
      #     them allows the request
      #   - Deny: reject the request without consulting subsequent authorizers
      # Required, with no default.
      failurePolicy: Deny
      connectionInfo:
        # Controls how the webhook should communicate with the server.
        # Valid values:
        #   - KubeConfig: use the file specified in kubeConfigFile to locate the
        #     server.
        #   - InClusterConfig: use the in-cluster configuration to call the
        #     SubjectAccessReview API hosted by kube-apiserver. This mode is not
        #     allowed for kube-apiserver.
        type: KubeConfig
        # Path to KubeConfigFile for connection info
        # Required, if connectionInfo.Type is KubeConfig
        kubeConfigFile: /kube-system-authz-webhook.yaml
      # matchConditions is a list of conditions that must be met for a request to be sent to this
      # webhook. An empty list of matchConditions matches all requests.
      # There are a maximum of 64 match conditions allowed.
      #
      # The exact matching logic is (in order):
      #   1. If at least one matchCondition evaluates to FALSE, then the webhook is skipped.
      #   2. If ALL matchConditions evaluate to TRUE, then the webhook is called.
      #   3. If at least one matchCondition evaluates to an error (but none are FALSE):
      #      - If failurePolicy=Deny, then the webhook rejects the request
      #      - If failurePolicy=NoOpinion, then the error is ignored and the webhook is skipped
      matchConditions:
        # expression represents the expression which will be evaluated by CEL. Must evaluate to bool.
        # CEL expressions have access to the contents of the SubjectAccessReview in v1 version.
        # If version specified by subjectAccessReviewVersion in the request variable is v1beta1,
        # the contents would be converted to the v1 version before evaluating the CEL expression.
        #
        # Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/
        #
        # only send resource requests to the webhook
        - expression: has(request.resourceAttributes)
        # only intercept requests to kube-system
        - expression: request.resourceAttributes.namespace == 'kube-system'
        # don't intercept requests from kube-system service accounts
        - expression: "!('system:serviceaccounts:kube-system' in request.user.groups)"
  - type: Node
    name: node
  - type: RBAC
    name: rbac
  - type: Webhook
    name: in-cluster-authorizer
    webhook:
      authorizedTTL: 5m
      unauthorizedTTL: 30s
      timeout: 3s
      subjectAccessReviewVersion: v1
      failurePolicy: NoOpinion
      connectionInfo:
        type: InClusterConfig

The following configuration examples illustrate real-world scenarios that need the ability to specify multiple webhooks with distinct settings, precedence order, and failure modes.

Protecting Installed CRDs

Ensuring the availability of Custom Resource Definitions (CRDs) at cluster startup has been a key demand. One of the blockers to having a controller reconcile those CRDs is having a protection mechanism for them, which can be achieved through multiple authorization webhooks. This was not possible before, because the Kubernetes API Server authorization chain allowed only a single webhook. Now, with the Structured Authorization Configuration feature, administrators can specify multiple webhooks, offering a solution where RBAC falls short, especially when denying permissions to 'non-system' users for certain CRDs.

Assuming the following for this scenario:

The "protected" CRDs are installed.

They can only be modified by users in the group admin.

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: system-crd-protector
    webhook:
      unauthorizedTTL: 30s
      timeout: 3s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      failurePolicy: Deny
      connectionInfo:
        type: KubeConfig
        kubeConfigFile: /files/kube-system-authz-webhook.yaml
      matchConditions:
        # only send resource requests to the webhook
        - expression: has(request.resourceAttributes)
        # only intercept requests for CRDs
        - expression: request.resourceAttributes.resource == "customresourcedefinitions"
        - expression: request.resourceAttributes.group == ""
        # only intercept update, patch, delete, or deletecollection requests
        - expression: request.resourceAttributes.verb in ['update', 'patch', 'delete', 'deletecollection']
  - type: Node
  - type: RBAC

Preventing unnecessarily nested webhooks

A system administrator wants to apply specific validations to requests before handing them off to webhooks using frameworks like Open Policy Agent. In the past, this would require running nested webhooks within the one added to the authorization chain to achieve the desired result. The Structured Authorization Configuration feature simplifies this process, offering a structured API to selectively trigger additional webhooks when needed. It also enables administrators to set distinct failure policies for each webhook, ensuring more consistent and predictable responses.

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: system-crd-protector
    webhook:
      unauthorizedTTL: 30s
      timeout: 3s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      failurePolicy: Deny
      connectionInfo:
        type: KubeConfig
        kubeConfigFile: /files/kube-system-authz-webhook.yaml
      matchConditions:
        # only send resource requests to the webhook
        - expression: has(request.resourceAttributes)
        # only intercept requests for CRDs
        - expression: request.resourceAttributes.re
·kubernetes.io·
Kubernetes 1.30: Multi-Webhook and Modular Authorization Made Much Easier
Kubernetes 1.30: Structured Authentication Configuration Moves to Beta
Kubernetes 1.30: Structured Authentication Configuration Moves to Beta

Kubernetes 1.30: Structured Authentication Configuration Moves to Beta

https://kubernetes.io/blog/2024/04/25/structured-authentication-moves-to-beta/

With Kubernetes 1.30, we (SIG Auth) are moving Structured Authentication Configuration to beta.

Today's article is about authentication: finding out who's performing a task, and checking that they are who they say they are. Check back in tomorrow to find out what's new in Kubernetes v1.30 around authorization (deciding what someone can and can't access).

Motivation

Kubernetes has had a long-standing need for a more flexible and extensible authentication system. The current system, while powerful, has some limitations that make it difficult to use in certain scenarios. For example, it is not possible to use multiple authenticators of the same type (e.g., multiple JWT authenticators) or to change the configuration without restarting the API server. The Structured Authentication Configuration feature is the first step towards addressing these limitations and providing a more flexible and extensible way to configure authentication in Kubernetes.

What is structured authentication configuration?

Kubernetes v1.30 builds on the experimental support for configuring authentication based on a file, which was added as alpha in Kubernetes v1.29. At this beta stage, Kubernetes only supports configuring JWT authenticators, which serve as the next iteration of the existing OIDC authenticator. A JWT authenticator authenticates Kubernetes users using JWT-compliant tokens: it attempts to parse a raw ID token and verifies that it was signed by the configured issuer.

The Kubernetes project added configuration from a file so that it can provide more flexibility than using command line options (which continue to work, and are still supported). Supporting a configuration file also makes it easy to deliver further improvements in upcoming releases.

Benefits of structured authentication configuration

Here's why using a configuration file to configure cluster authentication is a benefit:

Multiple JWT authenticators: You can configure multiple JWT authenticators simultaneously. This allows you to use multiple identity providers (e.g., Okta, Keycloak, GitLab) without needing to use an intermediary like Dex that handles multiplexing between multiple identity providers.

Dynamic configuration: You can change the configuration without restarting the API server. This allows you to add, remove, or modify authenticators without disrupting the API server.

Any JWT-compliant token: You can use any JWT-compliant token for authentication. This allows you to use tokens from any identity provider that supports JWT. The minimum valid JWT payload must contain the claims documented on the structured authentication configuration page in the Kubernetes documentation.

CEL (Common Expression Language) support: You can use CEL to determine whether the token's claims match the user's attributes in Kubernetes (e.g., username, group). This allows you to use complex logic to determine whether a token is valid.

Multiple audiences: You can configure multiple audiences for a single authenticator. This allows you to use the same authenticator for multiple audiences, such as using a different OAuth client for kubectl and dashboard.

Using identity providers that don't support OpenID Connect discovery: You can use identity providers that don't support OpenID Connect discovery. The only requirement is to host the discovery document at a different location than the issuer (such as locally in the cluster) and to specify the issuer.discoveryURL in the configuration file.

How to use Structured Authentication Configuration

To use structured authentication configuration, you specify the path to the authentication configuration using the --authentication-config command line argument in the API server. The configuration file is a YAML file that specifies the authenticators and their configuration. Here is an example configuration file that configures two JWT authenticators:

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
# Someone with a valid token from either of these issuers could authenticate
# against this cluster.
jwt:
  - issuer:
      url: https://issuer1.example.com
      audiences:
        - audience1
        - audience2
      audienceMatchPolicy: MatchAny
    claimValidationRules:
      - expression: 'claims.hd == "example.com"'
        message: "the hosted domain name must be example.com"
    claimMappings:
      username:
        expression: 'claims.username'
      groups:
        expression: 'claims.groups'
      uid:
        expression: 'claims.uid'
      extra:
        - key: 'example.com/tenant'
          expression: 'claims.tenant'
    userValidationRules:
      - expression: "!user.username.startsWith('system:')"
        message: "username cannot use reserved system: prefix"
  # second authenticator that exposes the discovery document at a different location
  # than the issuer
  - issuer:
      url: https://issuer2.example.com
      discoveryURL: https://discovery.example.com/.well-known/openid-configuration
      audiences:
        - audience3
        - audience4
      audienceMatchPolicy: MatchAny
    claimValidationRules:
      - expression: 'claims.hd == "example.com"'
        message: "the hosted domain name must be example.com"
    claimMappings:
      username:
        expression: 'claims.username'
      groups:
        expression: 'claims.groups'
      uid:
        expression: 'claims.uid'
      extra:
        - key: 'example.com/tenant'
          expression: 'claims.tenant'
    userValidationRules:
      - expression: "!user.username.startsWith('system:')"
        message: "username cannot use reserved system: prefix"
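Pointing the API server at a file like the one above takes a single flag; the path here is illustrative:

kube-apiserver --authentication-config=/etc/kubernetes/authentication-config.yaml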

Migration from command line arguments to configuration file

The Structured Authentication Configuration feature is designed to be backwards-compatible with the existing approach, based on command line options, for configuring the JWT authenticator. This means that you can continue to use the existing command-line options to configure the JWT authenticator. However, we (Kubernetes SIG Auth) recommend migrating to the new configuration file-based approach, as it provides more flexibility and extensibility.

Note

If you specify --authentication-config along with any of the --oidc-* command line arguments, this is a misconfiguration. In this situation, the API server reports an error and then immediately exits.

If you want to switch to using structured authentication configuration, you have to remove the --oidc-* command line arguments, and use the configuration file instead.

Here is an example of how to migrate from the command-line flags to the configuration file:

Command-line arguments

--oidc-issuer-url=https://issuer.example.com
--oidc-client-id=example-client-id
--oidc-username-claim=username
--oidc-groups-claim=groups
--oidc-username-prefix=oidc:
--oidc-groups-prefix=oidc:
--oidc-required-claim="hd=example.com"
--oidc-required-claim="admin=true"
--oidc-ca-file=/path/to/ca.pem

There is no equivalent in the configuration file for the --oidc-signing-algs flag. For Kubernetes v1.30, the authenticator supports all the asymmetric algorithms listed in oidc.go.

Configuration file

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://issuer.example.com
      audiences:
        - example-client-id
      certificateAuthority: <value is the content of file /path/to/ca.pem>
    claimMappings:
      username:
        claim: username
        prefix: "oidc:"
      groups:
        claim: groups
        prefix: "oidc:"
    claimValidationRules:
      - claim: hd
        requiredValue: "example.com"
      - claim: admin
        requiredValue: "true"

What's next?

For Kubernetes v1.31, we expect the feature to stay in beta while we get more feedback. In the coming releases, we want to investigate:

Making distributed claims work via CEL expressions.

Egress selector configuration support for calls to issuer.url and issuer.discoveryURL.

You can learn more about this feature on the structured authentication configuration page in the Kubernetes documentation. You can also follow along on the KEP-3331 to track progress across the coming Kubernetes releases.

Try it out

In this post, I have covered the benefits the Structured Authentication Configuration feature brings in Kubernetes v1.30. To use this feature, you must specify the path to the authentication configuration using the --authentication-config command line argument. From Kubernetes v1.30, the feature is in beta and enabled by default. If you want to keep using command line arguments instead of a configuration file, those will continue to work as-is.

We would love to hear your feedback on this feature. Please reach out to us on the

sig-auth-authenticators-dev

channel on Kubernetes Slack (for an invitation, visit https://slack.k8s.io/).

How to get involved

If you are interested in getting involved in the development of this feature, share feedback, or participate in any other ongoing SIG Auth projects, please reach out on the #sig-auth channel on Kubernetes Slack.

You are also welcome to join the bi-weekly SIG Auth meetings, held every other Wednesday.

via Kubernetes Blog https://kubernetes.io/

April 24, 2024 at 08:00PM

·kubernetes.io·
Kubernetes 1.30: Structured Authentication Configuration Moves to Beta
GitHub comments abused to push malware via Microsoft repo URLs
GitHub comments abused to push malware via Microsoft repo URLs
A GitHub flaw, or possibly a design decision, is being abused by threat actors to distribute malware using URLs associated with a Microsoft repository, making the files appear trustworthy.
·bleepingcomputer.com·
GitHub comments abused to push malware via Microsoft repo URLs
ioquake3
ioquake3
Play Quake 3, mods, new games, or make your own!
·ioquake3.org·
ioquake3
id-Software/Quake: Quake GPL Source Release
id-Software/Quake: Quake GPL Source Release
Quake GPL Source Release. Contribute to id-Software/Quake development by creating an account on GitHub.
·github.com·
id-Software/Quake: Quake GPL Source Release
Tetragon - eBPF-based Security Observability and Runtime Enforcement
Tetragon - eBPF-based Security Observability and Runtime Enforcement
eBPF-based security observability and runtime enforcement for Kubernetes and Linux.
·tetragon.io·
Tetragon - eBPF-based Security Observability and Runtime Enforcement
awslabs/s3-connector-for-pytorch: The Amazon S3 Connector for PyTorch delivers high throughput for PyTorch training jobs that access and store data in Amazon S3.
awslabs/s3-connector-for-pytorch: The Amazon S3 Connector for PyTorch delivers high throughput for PyTorch training jobs that access and store data in Amazon S3.
The Amazon S3 Connector for PyTorch delivers high throughput for PyTorch training jobs that access and store data in Amazon S3. - awslabs/s3-connector-for-pytorch
·github.com·
awslabs/s3-connector-for-pytorch: The Amazon S3 Connector for PyTorch delivers high throughput for PyTorch training jobs that access and store data in Amazon S3.
botocove
botocove
A decorator to allow running a function against all AWS accounts in an organization
·pypi.org·
botocove
Framework’s Series A-1 and Community Participation
Framework’s Series A-1 and Community Participation
Today we’re announcing $18M in new funding from an incredible set of investors, with a $17M Series A-1 round led by Spark Capital, with Buckley Ventures, Anzu Partners, Cooler Master, and Pathbreaker Ventures participating. It’s ultimately your belief in our mission and products that drives our
·frame.work·
Framework’s Series A-1 and Community Participation
Which of these stages have you experienced before or are you currently in? Comment below👇 The term ‘burnout’ was first mentioned by… | Instagram
Which of these stages have you experienced before or are you currently in? Comment below👇 The term ‘burnout’ was first mentioned by… | Instagram
51K likes, 632 comments - thepresentpsychologist on February 21, 2024: "Which of these stages have you experienced before or are you currently in? Comment below👇 The term ‘burnout’ was first mentioned ...".
·instagram.com·
Which of these stages have you experienced before or are you currently in? Comment below👇 The term ‘burnout’ was first mentioned by… | Instagram
cookiecutter/cookiecutter: A cross-platform command-line utility that creates projects from cookiecutters (project templates), e.g. Python package projects, C projects.
cookiecutter/cookiecutter: A cross-platform command-line utility that creates projects from cookiecutters (project templates), e.g. Python package projects, C projects.
A cross-platform command-line utility that creates projects from cookiecutters (project templates), e.g. Python package projects, C projects. - cookiecutter/cookiecutter
·github.com·
cookiecutter/cookiecutter: A cross-platform command-line utility that creates projects from cookiecutters (project templates), e.g. Python package projects, C projects.
Bluefin is now Generally Available - Bluefin and Aurora - Universal Blue
Bluefin is now Generally Available - Bluefin and Aurora - Universal Blue
That’s a mouthful! Project Bluefin has been my passion project for going on three years now, and thanks to a bunch of awesome people we feel it’s time to move to general availability (GA) and out of beta. Our young dromeasaur is ready! Download the ISOs here This Fedora 39-based version (which we call gts) and it’s been baking for 6 months. It is designed to be installed on a device and follow Fedora’s releases in perpetuity – we accomplish this by maintaining the image in GitHub as a comm...
·universal-blue.discourse.group·
Bluefin is now Generally Available - Bluefin and Aurora - Universal Blue
Kubernetes 1.30: Validating Admission Policy Is Generally Available
Kubernetes 1.30: Validating Admission Policy Is Generally Available

Kubernetes 1.30: Validating Admission Policy Is Generally Available

https://kubernetes.io/blog/2024/04/24/validating-admission-policy-ga/

On behalf of the Kubernetes project, I am excited to announce that ValidatingAdmissionPolicy has reached general availability as part of the Kubernetes 1.30 release. If you have not yet read about this new declarative alternative to validating admission webhooks, it may be interesting to read our previous post about the new feature. If you have already heard about ValidatingAdmissionPolicies and you are eager to try them out, there is no better time to do it than now.

Let's have a taste of a ValidatingAdmissionPolicy, by replacing a simple webhook.

Example admission webhook

First, let's take a look at an example of a simple webhook. Here is an excerpt from a webhook that enforces runAsNonRoot, readOnlyRootFilesystem, allowPrivilegeEscalation, and privileged to be set to the least permissive values.

func verifyDeployment(deploy *appsv1.Deployment) error {
	var errs []error
	for i, c := range deploy.Spec.Template.Spec.Containers {
		if c.Name == "" {
			return fmt.Errorf("container %d has no name", i)
		}
		if c.SecurityContext == nil {
			errs = append(errs, fmt.Errorf("container %q does not have SecurityContext", c.Name))
			// skip the remaining checks to avoid dereferencing a nil SecurityContext
			continue
		}
		if c.SecurityContext.RunAsNonRoot == nil || !*c.SecurityContext.RunAsNonRoot {
			errs = append(errs, fmt.Errorf("container %q must set RunAsNonRoot to true in its SecurityContext", c.Name))
		}
		if c.SecurityContext.ReadOnlyRootFilesystem == nil || !*c.SecurityContext.ReadOnlyRootFilesystem {
			errs = append(errs, fmt.Errorf("container %q must set ReadOnlyRootFilesystem to true in its SecurityContext", c.Name))
		}
		if c.SecurityContext.AllowPrivilegeEscalation != nil && *c.SecurityContext.AllowPrivilegeEscalation {
			errs = append(errs, fmt.Errorf("container %q must NOT set AllowPrivilegeEscalation to true in its SecurityContext", c.Name))
		}
		if c.SecurityContext.Privileged != nil && *c.SecurityContext.Privileged {
			errs = append(errs, fmt.Errorf("container %q must NOT set Privileged to true in its SecurityContext", c.Name))
		}
	}
	return errors.NewAggregate(errs)
}

Check out What are admission webhooks? Or, see the full code of this webhook to follow along with this walkthrough.

The policy

Now let's try to recreate the validation faithfully with a ValidatingAdmissionPolicy.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "pod-security.policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: object.spec.template.spec.containers.all(c, has(c.securityContext) && has(c.securityContext.runAsNonRoot) && c.securityContext.runAsNonRoot)
      message: 'all containers must set runAsNonRoot to true'
    - expression: object.spec.template.spec.containers.all(c, has(c.securityContext) && has(c.securityContext.readOnlyRootFilesystem) && c.securityContext.readOnlyRootFilesystem)
      message: 'all containers must set readOnlyRootFilesystem to true'
    - expression: object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.allowPrivilegeEscalation) || !c.securityContext.allowPrivilegeEscalation)
      message: 'all containers must NOT set allowPrivilegeEscalation to true'
    - expression: object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)
      message: 'all containers must NOT set privileged to true'

Create the policy with kubectl. Great, no complaints so far. But let's get the policy object back and take a look at its status.

kubectl get -oyaml validatingadmissionpolicies/pod-security.policy.example.com

status:
  typeChecking:
    expressionWarnings:
      - fieldRef: spec.validations[3].expression
        warning: |
          apps/v1, Kind=Deployment: ERROR: <input>:1:76: undefined field 'Privileged'
           | object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)
           | ...........................................................................^
          ERROR: <input>:1:128: undefined field 'Privileged'
           | object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)
           | ...............................................................................................................................^

The policy was checked against its matched type, which is apps/v1.Deployment. Looking at the fieldRef, the problem was with the 3rd expression (the index starts at 0). The expression in question accessed an undefined Privileged field. Ah, looks like it was a copy-and-paste error: the field name should be lowercase.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "pod-security.policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: object.spec.template.spec.containers.all(c, has(c.securityContext) && has(c.securityContext.runAsNonRoot) && c.securityContext.runAsNonRoot)
      message: 'all containers must set runAsNonRoot to true'
    - expression: object.spec.template.spec.containers.all(c, has(c.securityContext) && has(c.securityContext.readOnlyRootFilesystem) && c.securityContext.readOnlyRootFilesystem)
      message: 'all containers must set readOnlyRootFilesystem to true'
    - expression: object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.allowPrivilegeEscalation) || !c.securityContext.allowPrivilegeEscalation)
      message: 'all containers must NOT set allowPrivilegeEscalation to true'
    - expression: object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.privileged) || !c.securityContext.privileged)
      message: 'all containers must NOT set privileged to true'

Check its status again, and you should see all warnings cleared.

Next, let's create a namespace for our tests.

kubectl create namespace policy-test

Then, I bind the policy to the namespace. But at this point, I set the action to Warn so that the policy prints out warnings instead of rejecting the requests. This is especially useful to collect results from all expressions during development and automated testing.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "pod-security.policy-binding.example.com"
spec:
  policyName: "pod-security.policy.example.com"
  validationActions: ["Warn"]
  matchResources:
    namespaceSelector:
      matchLabels:
        "kubernetes.io/metadata.name": "policy-test"

Test out policy enforcement.

kubectl create -n policy-test -f- <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
EOF

Warning: Validation failed for ValidatingAdmissionPolicy 'pod-security.policy.example.com' with binding 'pod-security.policy-binding.example.com': all containers must set runAsNonRoot to true
Warning: Validation failed for ValidatingAdmissionPolicy 'pod-security.policy.example.com' with binding 'pod-security.policy-binding.example.com': all containers must set readOnlyRootFilesystem to true
Warning: Validation failed for ValidatingAdmissionPolicy 'pod-security.policy.example.com' with binding 'pod-security.policy-binding.example.com': all containers must NOT set allowPrivilegeEscalation to true
Warning: Validation failed for ValidatingAdmissionPolicy 'pod-security.policy.example.com' with binding 'pod-security.policy-binding.example.com': all containers must NOT set privileged to true
Error from server: error when creating "STDIN": admission webhook "webhook.example.com" denied the request: [container "nginx" must set RunAsNonRoot to true in its SecurityContext, container "nginx" must set ReadOnlyRootFilesystem to true in its SecurityContext, container "nginx" must NOT set AllowPrivilegeEscalation to true in its SecurityContext, container "nginx" must NOT set Privileged to true in its SecurityContext]

Looks great! The policy and the webhook give equivalent results. After a few other cases, when we are confident with our policy, maybe it is time to do some cleanup:

• For every expression, we repeat access to object.spec.template.spec.containers and to each securityContext.

• There is a pattern of checking presence of a field and then accessing it, which looks a bit verbose.

Fortunately, since Kubernetes 1.28, we have new solutions for both issues. Variable Composition allows us to extract repeated sub-expressions into their own variables. Kubernetes enables the optional library for CEL, which is excellent for working with fields that are, you guessed it, optional.

With both features in mind, let's refactor the policy a bit.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "pod-security.policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  variables:
    - name: containers
      expression: object.spec.template.spec.containers
    - name: securityContexts
      expression: 'variables.containers.map(c, c.?securityContext)'
  validations:
    - expression: variables.securityContexts.all(c, c.?runAsNonRoot == optional.of(true))
      message: 'all containers must set runAsNonRoot to true'
    - expression: variables.securityContexts.all(c, c.?readOnlyRootFilesystem == optional.of(true))
      message: 'all containers must set readOnlyRootFilesystem to true'
    - expression: variables.securityContexts.all(c, c.?allo
·kubernetes.io·
Kubernetes 1.30: Validating Admission Policy Is Generally Available
Week Ending April 21 2024
Week Ending April 21 2024

Week Ending April 21, 2024

https://lwkd.info/2024/20240423

Developer News

Kubernetes v1.30: Uwubernetes was released! Major features include Go workspaces, Pod Scheduling Readiness, VolumeManager reconstruction after kubelet restart, Node log query, and more. Read more in the announcement blog post and the release notes.

Release Schedule

Next Deadline: 1.31 Cycle Begins, April 2024

We are in the period between releases right now. Dates for 1.31 have not been published yet.

Featured PR

123905: Field selector for Services based on ClusterIP and Type

Clusters with unusually large numbers of headless Services (i.e. Services without a cluster IP) can cause memory bloat in the kubelet, which has to cache them as part of the API informer. This PR extends the Service API to allow filtering on both clusterIP and type, improving the memory usage of the kubelet and decreasing load on the API server. While this specific optimization only helps a niche audience, it's worth reinforcing how this technique can be applied elsewhere: when optimizing any controller, always keep an eye open for how API watch traffic could be mitigated with server-side logic or filters. Creating field selectors is easy and streamlined, and can likely be used in many more such optimizations.
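As a concrete illustration (assuming a cluster recent enough to include this change), listing all headless Services becomes a server-side filter rather than a client-side scan:

kubectl get services --all-namespaces --field-selector spec.clusterIP=None

The same selector string can be supplied in a client-go informer's ListOptions, so that filtered-out Services never reach the local watch cache.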

KEP of the Week

KEP 3521: Pod Scheduling Readiness

This KEP adds an API to mark Pods as ready or paused for scheduling, so that the scheduler does not waste cycles retrying Pods that are determined to be unschedulable. It gives users and controllers control over when a Pod is ready to be considered for scheduling, via the new .spec.schedulingGates field in the Pod API. The scheduler only attempts to place a Pod on a Node once its .spec.schedulingGates list is empty. A new Enqueue extension point is also added to customize Pod queueing behaviour.

This KEP graduated to stable in the v1.30 release.
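Here is a minimal sketch of the stable API: a Pod created with a non-empty gate list stays Pending (reason SchedulingGated) until a controller removes the gates. The gate name below is hypothetical; any qualified name works.

apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
    - name: example.com/quota-check # hypothetical gate managed by an external controller
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9

Once the controller is satisfied, it removes the gate and the scheduler picks the Pod up:

kubectl patch pod gated-pod --type=json -p='[{"op": "remove", "path": "/spec/schedulingGates/0"}]'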


via Last Week in Kubernetes Development https://lwkd.info/

April 23, 2024 at 06:00PM

·lwkd.info·
Week Ending April 21 2024
Defining the Role of Distinguished Engineers | AWS Executive Insights
Defining the Role of Distinguished Engineers | AWS Executive Insights
Watch as Clarke Rodgers, Director of AWS Enterprise Strategy, interviews Paul about why he joined AWS and what his experience has been like so far. You’ll hear more about Paul’s notable career as an early influencer in the evolution of the Internet, how he was inducted into the Internet Hall of Fame, and what he’s doing now as a Deputy CISO and Distinguished Engineer at AWS.
·aws.amazon.com·
Defining the Role of Distinguished Engineers | AWS Executive Insights
Kubernetes 1.30: Read-only volume mounts can be finally literally read-only
Kubernetes 1.30: Read-only volume mounts can be finally literally read-only

Kubernetes 1.30: Read-only volume mounts can be finally literally read-only

https://kubernetes.io/blog/2024/04/23/recursive-read-only-mounts/

Author: Akihiro Suda (NTT)

Read-only volume mounts have been a feature of Kubernetes since the beginning. Surprisingly, read-only mounts are not completely read-only under certain conditions on Linux. As of the v1.30 release, they can be made completely read-only, with alpha support for recursive read-only mounts.

Read-only volume mounts are not really read-only by default

Volume mounts can be deceptively complicated.

You might expect that the following manifest makes everything under /mnt in the containers read-only:

---
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: mnt
      hostPath:
        path: /mnt
  containers:
    - volumeMounts:
        - name: mnt
          mountPath: /mnt
          readOnly: true

However, any sub-mounts beneath /mnt may still be writable! For example, suppose that /mnt/my-nfs-server is a writable mount on the host. Inside the container, writes to /mnt/ will be rejected, but /mnt/my-nfs-server/ will still be writable.
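A quick way to see this on a Linux host (illustrative commands; a tmpfs stands in for the NFS mount):

# on the host: create a sub-mount beneath /mnt
mount -t tmpfs tmpfs /mnt/my-nfs-server

# inside a container mounting /mnt with readOnly: true
touch /mnt/file                  # fails: Read-only file system
touch /mnt/my-nfs-server/file    # succeeds: the sub-mount was never remounted read-only

This happens because the read-only attribute is applied only to the top-level mount and is not propagated to mounts that sit beneath it.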

New mount option: recursiveReadOnly

Kubernetes 1.30 added a new mount option, recursiveReadOnly, to make submounts recursively read-only.

The option can be enabled as follows:

---
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: mnt
      hostPath:
        path: /mnt
  containers:
    - volumeMounts:
        - name: mnt
          mountPath: /mnt
          readOnly: true
          # NEW
          # Possible values are Enabled, IfPossible, and Disabled.
          # Needs to be specified in conjunction with readOnly: true.
          recursiveReadOnly: Enabled

This is implemented by applying the MOUNT_ATTR_RDONLY attribute with the AT_RECURSIVE flag using mount_setattr(2) added in Linux kernel v5.12.
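For the curious, here is a minimal Go sketch of that same kernel interface using golang.org/x/sys/unix. This illustrates the syscall, not the actual kubelet or runtime code:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Recursively set the read-only attribute on /mnt and every mount
	// beneath it. Requires Linux >= 5.12 and CAP_SYS_ADMIN.
	attr := &unix.MountAttr{Attr_set: unix.MOUNT_ATTR_RDONLY}
	if err := unix.MountSetattr(unix.AT_FDCWD, "/mnt", unix.AT_RECURSIVE, attr); err != nil {
		log.Fatalf("mount_setattr: %v", err)
	}
	log.Println("/mnt and its submounts are now read-only")
}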

For backwards compatibility, the recursiveReadOnly field is not a replacement for readOnly, but is used in conjunction with it. To get a properly recursive read-only mount, you must set both fields.

Feature availability

To enable recursiveReadOnly mounts, the following components have to be used:

• Kubernetes: v1.30 or later, with the RecursiveReadOnlyMounts feature gate enabled. As of v1.30, the gate is marked as alpha.

• CRI runtime:
  • containerd: v2.0 or later

• OCI runtime:
  • runc: v1.1 or later
  • crun: v1.8.6 or later

• Linux kernel: v5.12 or later
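Because the feature gate is alpha in v1.30, it must be switched on explicitly. A minimal sketch, assuming the standard --feature-gates flag syntax (consult the feature gates reference for the exact set of components that require it; the kubelet and kube-apiserver are the usual candidates):

kubelet --feature-gates=RecursiveReadOnlyMounts=true ...
kube-apiserver --feature-gates=RecursiveReadOnlyMounts=true ...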

What's next?

Kubernetes SIG Node hopes, and expects, that the feature will be promoted to beta and eventually general availability (GA) in future releases of Kubernetes, so that users no longer need to enable the feature gate manually.

The default value of recursiveReadOnly will still remain Disabled, for backwards compatibility.

How can I learn more?

Please check out the documentation for further details of recursiveReadOnly mounts.

How to get involved?

This feature is driven by the SIG Node community. Please join us to connect with the community and share your ideas and feedback around the above feature and beyond. We look forward to hearing from you!

via Kubernetes Blog https://kubernetes.io/

April 22, 2024 at 08:00PM

·kubernetes.io·
Kubernetes 1.30: Read-only volume mounts can be finally literally read-only
Exploring KCL: Configuration and Data Structure Language; CUE and Pkl Replacement?
Exploring KCL: Configuration and Data Structure Language; CUE and Pkl Replacement?

Exploring KCL: Configuration and Data Structure Language; CUE and Pkl Replacement?

Dive into the world of K Configuration Language (KCL).

This review and walkthrough illuminates the features and advantages of using KCL to generate YAML or JSON configurations and manifests. We cover the basics of KCL's syntax, its approach to handling hierarchical data, and demonstrate how to define and manipulate configurations with clarity and precision.

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: Hostman 🔗 https://bit.ly/44ae0gf 🔗 Hostman offers affordable cloud services starting at just $1/month, including free bandwidth. The company’s services are hosted on globally secure, ISO-certified servers located in Tier 3 data centers. Key features include free Firewall, Private Networks, Images, Snapshots, and cost-effective backup solutions starting at $0.07/GB. Additionally, Hostman provides 24/7 rapid tech support and a 7-day trial with a $100 credit for new users. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

#KCL #Kubernetes

Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Gist with the commands: https://gist.github.com/vfarcic/e6636bb851ae28d748fc8c1517bac931 🔗 KCL: https://kcl-lang.io 🎬 Is CUE The Perfect Language For Kubernetes Manifests (Helm Templates Replacement)?: https://youtu.be/m6g0aWggdUQ 🎬 Is Timoni With CUE a Helm Replacement?: https://youtu.be/bbE1BFCs548 🎬 Is Pkl the Ultimate Data Format? Unveiling the Challenger to YAML, JSON, and CUE: https://youtu.be/Nm1ioWPRRVQ

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendar.app.google/Q9eaDUHN8ibWBaA7A to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ Twitter: https://twitter.com/vfarcic ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Introduction to KCL 01:03 Hostman (sponsor) 01:42 Introduction to KCL (cont.) 05:41 KCL in Action 14:12 KCL Pros and Cons

via YouTube https://www.youtube.com/watch?v=Gn6btuH3ULw

·youtube.com·
Exploring KCL: Configuration and Data Structure Language; CUE and Pkl Replacement?