Suggested Reads

54940 bookmarks
It’s amazing to me how Republicans reflect themselves in their accusations. Just because YOU would do it doesn’t mean a more responsible adult would do it in the same position. | No directive: FBI agents, tech executives deny government ordered Twitter to suppress Hunter Biden story | CNN Politics
Internal Twitter communications released by the company's new owner and CEO, Elon Musk, are fueling intense scrutiny of the FBI's efforts alongside social media companies to thwart foreign disinformation in the run-up to the 2020 election.
·cnn.com·
Blog: Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time
Authors: Fabio Bertinatto (Red Hat), Hemant Kumar (Red Hat)

Delegation of fsGroup to CSI drivers was first introduced as alpha in Kubernetes 1.22 and graduated to beta in Kubernetes 1.25. For Kubernetes 1.26, we are happy to announce that this feature has graduated to General Availability (GA).

In this release, if you specify an fsGroup in the securityContext for a (Linux) Pod, all processes in the pod's containers are part of the additional group that you specified. In previous Kubernetes releases, the kubelet would always apply the fsGroup ownership and permission changes to files in the volume according to the policy you specified in the Pod's .spec.securityContext.fsGroupChangePolicy field. Starting with Kubernetes 1.26, CSI drivers have the option to apply the fsGroup settings at volume mount time, which frees the kubelet from changing the permissions of files and directories in those volumes.

How does it work?

CSI drivers that support this feature should advertise the VOLUME_MOUNT_GROUP node capability. After recognizing this information, the kubelet passes the fsGroup information to the CSI driver during pod startup. This is done through the NodeStageVolumeRequest and NodePublishVolumeRequest CSI calls. Consequently, the CSI driver is expected to apply the fsGroup to the files in the volume using a mount option. As an example, the Azure File CSI driver uses the gid mount option to map the fsGroup information to all the files in the volume.

Note that in the example above, the kubelet refrains from directly applying the permission changes to the files and directories in that volume. Additionally, two policy definitions no longer have an effect: neither .spec.fsGroupPolicy for the CSIDriver object, nor .spec.securityContext.fsGroupChangePolicy for the Pod. For more details about the inner workings of this feature, check out the enhancement proposal and the CSI Driver fsGroup Support page in the CSI developer documentation.

Why is it important?

Without this feature, applying the fsGroup information to files is not possible in certain storage environments. For instance, Azure File does not support a concept of POSIX-style ownership and permissions of files. The CSI driver is only able to set the file permissions at the volume level.

How do I use it?

This feature should be mostly transparent to users. If you maintain a CSI driver that should support this feature, read CSI Driver fsGroup Support for more information on how to support it in your CSI driver. Existing CSI drivers that do not support this feature will continue to work as usual: they will not receive any fsGroup information from the kubelet. In addition to that, the kubelet will continue to perform the ownership and permission changes to files for those volumes, according to the policies specified in .spec.fsGroupPolicy for the CSIDriver and .spec.securityContext.fsGroupChangePolicy for the relevant Pod.
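For reference, a minimal sketch of where fsGroup and fsGroupChangePolicy sit in a Pod spec. The pod name, image, and claim name are illustrative, not taken from the post; with a CSI driver that advertises VOLUME_MOUNT_GROUP, the fsGroup below would be applied at mount time, otherwise the kubelet applies it per fsGroupChangePolicy:

  apiVersion: v1
  kind: Pod
  metadata:
    name: fsgroup-demo                    # illustrative name
  spec:
    securityContext:
      fsGroup: 3000                       # supplemental group applied to files in the volume
      fsGroupChangePolicy: "OnRootMismatch"  # only consulted when the kubelet (not the CSI driver) applies fsGroup
    containers:
    - name: app
      image: registry.k8s.io/pause:3.9    # placeholder image
      volumeMounts:
      - name: data
        mountPath: /data
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim               # illustrative PVC name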
·kubernetes.io·
The pitfalls of blocking IP addresses
Using IP blocks to make domains unreachable is a far-reaching method with undesirable side effects, because there is no one-to-one relationship between IP addresses and domains.
·malwarebytes.com·
Lockdownyourlife (@Lockdownyourlife@infosec.exchange)
A friendly reminder to set up Google Alerts for yourself, your family, your business, and your trademarks, so that in case of a data breach, harassment, stalking, NCP, or even rumors/crisis, you get a notification and can respond accordingly. #googlealerts #google #reminder #MorningMeditation #infosec #techie #safety #InformationSecurity #privacy #business #coffee
·infosec.exchange·
Blog: Kubernetes v1.26: GA Support for Kubelet Credential Providers
Authors: Andrew Sy Kim (Google), Dixita Narang (Google)

Kubernetes v1.26 introduced generally available (GA) support for kubelet credential provider plugins, offering an extensible plugin framework to dynamically fetch credentials for any container image registry.

Background

Kubernetes supports the ability to dynamically fetch credentials for a container registry service. Prior to Kubernetes v1.20, this capability was compiled into the kubelet and only available for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry.

Figure 1: Kubelet built-in credential provider support for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry.

Kubernetes v1.20 introduced alpha support for kubelet credential provider plugins, which provides a mechanism for the kubelet to dynamically authenticate and pull images for arbitrary container registries - whether these are public registries, managed services, or even a self-hosted registry. In Kubernetes v1.26, this feature is now GA.

Figure 2: Kubelet credential provider overview

Why is it important?

Prior to Kubernetes v1.20, if you wanted to dynamically fetch credentials for image registries other than ACR (Azure Container Registry), ECR (Elastic Container Registry), or GCR (Google Container Registry), you needed to modify the kubelet code. The new plugin mechanism can be used in any cluster, and lets you authenticate to new registries without any changes to Kubernetes itself. Any cloud provider or vendor can publish a plugin that lets you authenticate with their image registry.

How it works

The kubelet and the exec plugin binary communicate through stdio (stdin, stdout, and stderr) by sending and receiving JSON-serialized, API-versioned types. If the exec plugin is enabled and the kubelet requires authentication information for an image that matches against a plugin, the kubelet will execute the plugin binary, passing the CredentialProviderRequest API via stdin. The exec plugin then communicates with the container registry to dynamically fetch the credentials and returns them in an encoded response of the CredentialProviderResponse API to the kubelet via stdout.

Figure 3: Kubelet credential provider plugin flow

When returning credentials to the kubelet, the plugin can also indicate how long the credentials may be cached, to prevent unnecessary execution of the plugin by the kubelet for subsequent image pull requests to the same registry. In cases where the cache duration is not specified by the plugin, a default cache duration can be specified by the kubelet (more details below).

  {
    "apiVersion": "kubelet.k8s.io/v1",
    "kind": "CredentialProviderResponse",
    "auth": {
      "cacheDuration": "6h",
      "private-registry.io/my-app": {
        "username": "exampleuser",
        "password": "token12345"
      }
    }
  }

In addition, the plugin can specify the scope in which cached credentials are valid. This is specified through the cacheKeyType field in CredentialProviderResponse. When the value is Image, the kubelet will only use cached credentials for future image pulls that exactly match the image of the first request. When the value is Registry, the kubelet will use cached credentials for any subsequent image pulls destined for the same registry host but using different paths (for example, gcr.io/foo/bar and gcr.io/bar/foo refer to different images from the same registry). Lastly, when the value is Global, the kubelet will use returned credentials for all images that match against the plugin, including images that can map to different registry hosts (for example, gcr.io vs k8s.gcr.io). The cacheKeyType field is required by plugin implementations.

  {
    "apiVersion": "kubelet.k8s.io/v1",
    "kind": "CredentialProviderResponse",
    "auth": {
      "cacheKeyType": "Registry",
      "private-registry.io/my-app": {
        "username": "exampleuser",
        "password": "token12345"
      }
    }
  }

Using kubelet credential providers

You can configure credential providers by installing the exec plugin(s) into a local directory accessible by the kubelet on every node. Then you set two command line arguments for the kubelet:

--image-credential-provider-config: the path to the credential provider plugin config file.
--image-credential-provider-bin-dir: the path to the directory where credential provider plugin binaries are located.

The configuration file passed into --image-credential-provider-config is read by the kubelet to determine which exec plugins should be invoked for a container image used by a Pod. Note that the name of each provider must match the name of the binary located in the local directory specified in --image-credential-provider-bin-dir, otherwise the kubelet cannot locate the path of the plugin to invoke.

  kind: CredentialProviderConfig
  apiVersion: kubelet.config.k8s.io/v1
  providers:
  - name: auth-provider-gcp
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    matchImages:
    - "container.cloud.google.com"
    - "gcr.io"
    - "*.gcr.io"
    - "*.pkg.dev"
    args:
    - get-credentials
    - --v=3
    defaultCacheDuration: 1m

Below is an overview of how the Kubernetes project is using kubelet credential providers for end-to-end testing.

Figure 4: Kubelet credential provider configuration used for Kubernetes e2e testing

For more configuration details, see Kubelet Credential Providers.

Getting Involved

Come join SIG Node if you want to report bugs or have feature requests for the Kubelet Credential Provider. You can reach us through the following ways:

Slack: #sig-node
Mailing list
Open Community Issues/PRs
Biweekly meetings
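To make the wiring concrete, here is a minimal sketch of the two kubelet flags and of the request the kubelet pipes to a matching plugin's stdin. The file paths and the image name are assumptions for illustration, not values from the post; the request uses the credentialprovider.kubelet.k8s.io/v1 API version shown in the provider entry above:

  # Hypothetical kubelet flags (paths are illustrative):
  kubelet \
    --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml \
    --image-credential-provider-bin-dir=/usr/local/bin/credential-providers

  # Sketch of the CredentialProviderRequest sent on the plugin's stdin:
  {
    "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
    "kind": "CredentialProviderRequest",
    "image": "gcr.io/example-project/app:v1"
  }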
·kubernetes.io·
How to Monitor and Fix PostgreSQL Database Locks in Rails
PostgreSQL database locks usually work seamlessly, until they don't. Before your Rails app's dataset and traffic reach a certain scale, you're unlikely to face any lock-related issues. But if your app suddenly slows down to a crawl, deadlocks are likely to blame. In this blog post, I'll describe how to monitor database locks and propose best practices to prevent them from causing issues.
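As a starting point for that kind of monitoring, here is a sketch (not taken from the article) using PostgreSQL's built-in pg_stat_activity view and the pg_blocking_pids() function, available since PostgreSQL 9.6, to list blocked sessions alongside the sessions holding the conflicting locks:

  -- Show blocked sessions together with the sessions blocking them.
  SELECT blocked.pid    AS blocked_pid,
         blocked.query  AS blocked_query,
         blocking.pid   AS blocking_pid,
         blocking.query AS blocking_query
  FROM pg_stat_activity AS blocked
  JOIN pg_stat_activity AS blocking
    ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
  WHERE blocked.wait_event_type = 'Lock';

In a Rails app, the same query could be run from a console via ActiveRecord::Base.connection.execute or fed into whatever monitoring the post recommends.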
·pawelurbanek.com·
Okta's source code stolen after GitHub repositories hacked
In a 'confidential' email notification sent by Okta and seen by BleepingComputer, the company states that attackers gained access to its GitHub repositories this month and stole the company's source code.
·bleepingcomputer.com·
The Architecture of Mastodon
I was curious about how Mastodon is actually implemented. A high-level overview:
* Web app is written in Ruby on Rails
* Persistence layer is PostgreSQL
* File storage/mailer is S3/SMTP agnostic
* Redis for precomputed social feed cache and worker queue
* Sidekiq for background processing
* A node handler for streaming (notifications, new posts, etc.)
Looking at the code, it's definitely an interesting look into the early 2010s style of development. There's nothing wrong with it, and in
·matt-rickard.com·