Apple blew $10 billion on failed car project, considered buying Tesla
Apple spent roughly $1 billion a year on its car project before canceling it last month, according to a report in Bloomberg.
Tags:
March 08, 2024 at 02:25PM
Lightning Round at Security Slam 2023
December 15, 2023, marked a significant day in the world of Kubernetes, as the community came together for a special Lightning Round of the Security Slam.
Tags:
via Pocket https://www.cncf.io/reports/lightning-round-at-security-slam-2023/
March 08, 2024 at 11:14AM
Lf
Linux Foundation - Sponsorship Program Model See main foundation page Kind of nonprofit: lf Sponsor URL: https://raw.githubusercontent.com/jmertic/lf-landscape/main/landscape.yml Levels URL: https://www.linuxfoundation.org/hubfs/lf_member_benefits_122723a.
Tags:
via Pocket https://fossfoundation.info/sponsorships/lf
March 08, 2024 at 10:53AM
Week Ending March 3, 2024
http://lwkd.info/2024/20240307
Developer News
All CI jobs must be on K8s community infra as of yesterday. The infra team will migrate jobs that are simple to move, but jobs that you don’t help them migrate may be deleted. Update your jobs now.
Monitoring dashboards for the GKE and EKS build clusters are live. Also, there was an outage in EKS jobs last week.
After a year of work led by Tim Hockin, Go Workspaces support for hacking on Kubernetes is now available, eliminating a lot of GOPATH pain.
It’s time to start working on your SIG Annual Reports, which you should find a lot shorter and easier than in previous years. Note that you don’t have to be a SIG Chair to write these; the Chairs just have to review them.
Release Schedule
Next Deadline: Test Freeze, March 27th
Code Freeze is now in effect. If your KEP did not get tracked and you want to get your KEP shipped in the 1.30 release, please file an exception as soon as possible.
March Cherry Pick deadline for patch releases is the 8th.
Featured PRs
Selectors in Kubernetes have long been a way to limit large API calls like List and Watch, requesting only objects with specific labels or similar. In operators this can be very important for reducing the memory usage of shared informer caches, as well as generally keeping apiserver load down. Some core objects extended selectors beyond labels, allowing filtering on other fields, such as listing Pods based on spec.nodeName. But this set of fields was limited and could feel random if you didn’t know the specific history of the API (e.g. Pods need a node-name filter because it’s the main request made by the kubelet). And it wasn’t available at all to custom types. This PR expands the system, allowing each custom type to declare selectable fields which will be checked and indexed automatically. The declaration uses JSONPath in a very similar way to the additionalPrinterColumns feature:
selectableFields:
  - jsonPath: .spec.color
These can then be used in the API just like any other field selector:
c.List(context.Background(), &redThings, client.MatchingFields{
    "spec.color": "red",
})
As an alpha feature, this is behind the CustomResourceFieldSelectors feature gate.
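For context, here is a minimal sketch of how such a declaration might sit in a full CRD manifest; the example.com group and Thing kind are invented for illustration, and the feature requires the CustomResourceFieldSelectors gate mentioned above.

```yaml
# Hypothetical CRD sketch: group, kind, and field names are invented for illustration.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: things.example.com
spec:
  group: example.com
  names:
    kind: Thing
    plural: things
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      # Declares .spec.color as a field selector; the apiserver indexes it.
      selectableFields:
        - jsonPath: .spec.color
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                color:
                  type: string
```

With a declaration like this in place, the same filter should also be usable from the CLI, e.g. kubectl get things --field-selector spec.color=red.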
KEP of the Week
KEP-1610: Container Resource based Autoscaling
For scaling pods based on resource usage, the HPA currently calculates the sum of all the individual container’s resource usage. This is not suitable for workloads where the containers are not related to each other. This KEP proposes that the HPA also provide an option to scale pods based on the resource usages of individual containers in a Pod. The KEP proposes adding a new ContainerResourceMetricSource metric source, with a new Container field, which will be used to identify the container for which the resources should be tracked. When there are multiple containers in a Pod, the individual resource usages of each container can change at different rates. Adding a way to specify the target gives more fine grained control over the scaling.
This KEP is in beta since v1.27 and is planned to graduate to stable in v1.30.
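To make the KEP concrete, here is a sketch of an HPA using the ContainerResource metric source; the Deployment name (web) and container name (app) are invented for illustration, assuming the autoscaling/v2 API.

```yaml
# Sketch: scale on the CPU usage of one container only, not the Pod-wide sum.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        container: app   # only this container's usage drives scaling
        target:
          type: Utilization
          averageUtilization: 60
```

A sidecar in the same Pod (a log shipper, say) then no longer skews the scaling signal.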
Other Merges
Tunnel kubectl port-forwarding through websockets
Enhanced conflict detection for Service Account and JWT
Create token duration can be zero
Reject empty usernames in OIDC authentication
OpenAPI V2 won’t publish non-matching group-version
New metrics: authorization webhook match conditions, jwt auth latency, watch cache latency
Kubeadm: list nodes needing upgrades, don’t pass duplicate default flags, better upgrade plans, WaitForAllControlPlaneComponents, upgradeConfiguration timeouts, upgradeConfiguration API
Implement strict JWT compact serialization enforcement
Don’t leak discovery documents via the Spec.Service field
Let the container runtime garbage-collect images by tagging them
Client-Go can upgrade subresource fields, and handles cache deletions
Wait for the ProviderID to be available before initializing a node
Don’t panic if nodecondition is nil
Broadcaster logging is now logging level 3
Access mode label for SELinux mounts
AuthorizationConfiguration v1alpha1 is also v1beta1
Kubelet user mapping IDs are configurable
Filter group versions in aggregated API requests
Match condition e2e tests are conformance
Kubelet gets constants from cadvisor
Promotions
PodSchedulingReadiness to GA
ImageMaximumGCAge to Beta
StructuredAuthorizationConfiguration to beta
MinDomainsInPodTopologySpread to beta
RemoteCommand Over Websockets to beta
ContainerCheckpoint to beta
ServiceAccountToken Info to beta
AggregatedDiscovery v2 to GA
PodHostIPs to GA
Version Updates
cadvisor to v0.49.0
kubedns to 1.23.0
Subprojects and Dependency Updates
kubespray to v2.24.1: sets Kubernetes v1.28.6 as the default Kubernetes version.
prometheus to v2.50.1: fix for broken /metadata API endpoint.
via Last Week in Kubernetes Development http://lwkd.info/
March 07, 2024 at 05:00PM
AWS Open Source (@AWSOpen) | Twitter
Hello World! Powering up #OSCON2017 Collaborate with us here on open source projects and releases. Build your #opensource career in #containers at #AWS!
Tags:
via Pocket https://twitter.com/awsopen
March 07, 2024 at 03:34PM
Crossplane Providers and Managed Resources | Tutorial (Part 2)
In this second installment of our Crossplane tutorial series, we dive deeper into the world of Crossplane Providers and Managed ...
via YouTube https://www.youtube.com/watch?v=o53_7vuWjw4
Tech Conference Speaking - Impact and Measures
Speaking at tech conferences is a key part of many Developer Relations and Developer Advocacy strategies - but how can we get better at measuring the impact of these activities? Join for this audio chat where we'll talk about different approaches!
Tags:
via Pocket https://www.linkedin.com/events/techconferencespeaking-impactan7171264567392550913/
March 07, 2024 at 10:05AM
CRI-O: Applying seccomp profiles from OCI registries
https://kubernetes.io/blog/2024/03/07/cri-o-seccomp-oci-artifacts/
Author: Sascha Grunert
Seccomp stands for secure computing mode and has been a feature of the Linux kernel since version 2.6.12. It can be used to sandbox the privileges of a process, restricting the calls it is able to make from userspace into the kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a node to your Pods and containers.
But distributing those seccomp profiles is a major challenge in Kubernetes, because the JSON files have to be available on all nodes where a workload can possibly run. Projects like the Security Profiles Operator solve that problem by running as a daemon within the cluster, which makes me wonder which part of that distribution could be done by the container runtime.
Runtimes usually apply the profiles from a local path, for example:
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    - name: container
      image: nginx:1.25.3
      securityContext:
        seccompProfile:
          type: Localhost
          localhostProfile: nginx-1.25.3.json
The profile nginx-1.25.3.json has to be available in the root directory of the kubelet, appended by the seccomp directory. This means the default location for the profile on-disk would be /var/lib/kubelet/seccomp/nginx-1.25.3.json. If the profile is not available, then runtimes will fail on container creation like this:
kubectl get pods
NAME   READY   STATUS                 RESTARTS   AGE
pod    0/1     CreateContainerError   0          38s
kubectl describe pod/pod | tail
Tolerations:   node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
               node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  117s                default-scheduler  Successfully assigned default/pod to 127.0.0.1
  Normal   Pulling    117s                kubelet            Pulling image "nginx:1.25.3"
  Normal   Pulled     111s                kubelet            Successfully pulled image "nginx:1.25.3" in 5.948s (5.948s including waiting)
  Warning  Failed     7s (x10 over 111s)  kubelet            Error: setup seccomp: unable to load local profile "/var/lib/kubelet/seccomp/nginx-1.25.3.json": open /var/lib/kubelet/seccomp/nginx-1.25.3.json: no such file or directory
  Normal   Pulled     7s (x9 over 111s)   kubelet            Container image "nginx:1.25.3" already present on machine
The major obstacle is that Localhost profiles have to be distributed manually, which leads many end users to fall back to RuntimeDefault or even to running their workloads as Unconfined (with seccomp disabled).
CRI-O to the rescue
The Kubernetes container runtime CRI-O provides various features using custom annotations. The v1.30 release adds support for a new set of annotations called seccomp-profile.kubernetes.cri-o.io/POD and seccomp-profile.kubernetes.cri-o.io/<CONTAINER>. Those annotations allow you to specify:
a seccomp profile for a specific container, when used as: seccomp-profile.kubernetes.cri-o.io/<CONTAINER> (example: seccomp-profile.kubernetes.cri-o.io/webserver: 'registry.example/example/webserver:v1')
a seccomp profile for every container within a pod, when used without the container name suffix but the reserved name POD: seccomp-profile.kubernetes.cri-o.io/POD
a seccomp profile for a whole container image, if the image itself contains the annotation seccomp-profile.kubernetes.cri-o.io/POD or seccomp-profile.kubernetes.cri-o.io/<CONTAINER>.
CRI-O will only respect the annotation if the runtime is configured to allow it, as well as for workloads running as Unconfined. All other workloads will still use the value from the securityContext with a higher priority.
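To make that precedence concrete, here is a sketch: in a pod like the following, the securityContext takes priority and the annotation is ignored, because the workload is not running as Unconfined (the pod and container names are illustrative).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: precedence-demo   # illustrative name
  annotations:
    seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2
spec:
  containers:
    - name: app
      image: nginx:1.25.3
      securityContext:
        seccompProfile:
          type: RuntimeDefault   # wins over the annotation above
```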
The annotations alone will not help much with the distribution of the profiles, but the way they can be referenced will! For example, you can now specify seccomp profiles like regular container images by using OCI artifacts:
apiVersion: v1
kind: Pod
metadata:
  name: pod
  annotations:
    seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2
spec: …
The image quay.io/crio/seccomp:v2 contains a seccomp.json file, which contains the actual profile content. Tools like ORAS or Skopeo can be used to inspect the contents of the image:
oras pull quay.io/crio/seccomp:v2
Downloading 92d8ebfa89aa seccomp.json
Downloaded  92d8ebfa89aa seccomp.json
Pulled [registry] quay.io/crio/seccomp:v2
Digest: sha256:f0205dac8a24394d9ddf4e48c7ac201ca7dcfea4c554f7ca27777a7f8c43ec1b
jq . seccomp.json | head
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 38,
  "defaultErrno": "ENOSYS",
  "archMap": [
    {
      "architecture": "SCMP_ARCH_X86_64",
      "subArchitectures": [
        "SCMP_ARCH_X86",
        "SCMP_ARCH_X32"
skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq .
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.cncf.seccomp-profile.config.v1+json",
    "digest": "sha256:ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356",
    "size": 3
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar",
      "digest": "sha256:92d8ebfa89aa6dd752c6443c27e412df1b568d62b4af129494d7364802b2d476",
      "size": 18853,
      "annotations": {
        "org.opencontainers.image.title": "seccomp.json"
      }
    }
  ],
  "annotations": {
    "org.opencontainers.image.created": "2024-02-26T09:03:30Z"
  }
}
The image manifest contains a reference to a specific required config media type (application/vnd.cncf.seccomp-profile.config.v1+json) and a single layer (application/vnd.oci.image.layer.v1.tar) pointing to the seccomp.json file. But now, let's give that new feature a try!
Using the annotation for a specific container or whole pod
CRI-O needs to be configured adequately before it can utilize the annotation. To do this, add the annotation to the allowed_annotations array for the runtime. This can be done by using a drop-in configuration /etc/crio/crio.conf.d/10-crun.conf like this:
[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
allowed_annotations = [
    "seccomp-profile.kubernetes.cri-o.io",
]
Now, let's run CRI-O from the latest main commit. This can be done by either building it from source, using the static binary bundles or the prerelease packages.
To demonstrate this, I ran the crio binary from my command line using a single node Kubernetes cluster via local-up-cluster.sh. Now that the cluster is up and running, let's try a pod without the annotation running as seccomp Unconfined:
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    - name: container
      image: nginx:1.25.3
      securityContext:
        seccompProfile:
          type: Unconfined
kubectl apply -f pod.yaml
The workload is up and running:
kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          15s
And no seccomp profile got applied if I inspect the container using crictl:
export CONTAINER_ID=$(sudo crictl ps --name container -q)
sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp
null
Now, let's modify the pod to apply the profile quay.io/crio/seccomp:v2 to the container:
apiVersion: v1
kind: Pod
metadata:
  name: pod
  annotations:
    seccomp-profile.kubernetes.cri-o.io/container: quay.io/crio/seccomp:v2
spec:
  containers:
    - name: container
      image: nginx:1.25.3
      securityContext:
        seccompProfile:
          type: Unconfined
I have to delete and recreate the Pod, because only recreation will apply a new seccomp profile:
kubectl delete pod/pod
pod "pod" deleted
kubectl apply -f pod.yaml
pod/pod created
The CRI-O logs will now indicate that the runtime pulled the artifact:
WARN[…] Allowed annotations are specified for workload [seccomp-profile.kubernetes.cri-o.io]
INFO[…] Found container specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io/container=quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
INFO[…] Retrieved OCI artifact seccomp profile of len: 18853 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer
And the container is finally using the profile:
export CONTAINER_ID=$(sudo crictl ps --name container -q)
sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp | head
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 38,
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": [
    {
The same would work for every container in the pod, if users replace the /container suffix with the reserved name /POD, for example:
apiVersion: v1
kind: Pod
metadata:
  name: pod
  annotations:
    seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2
spec:
  containers:
    - name: container
      image: nginx:1.25.3
      securityContext:
        seccompProfile:
          type: Unconfined
Using the annotation for a container image
While specifying seccomp profiles as OCI artifacts on certain workloads is a cool feature, the majority of end users would like to link seccomp profiles to published container images. This can be done by using a container image annotation; instead of being applied to a Kubernetes Pod, the annotation is some metadata applied at the container image itself. For example, Podman can be used to add the image annotation directly during image build:
podman build \
  --annotation seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 \
  -t quay.io/crio/nginx-seccomp:v2 .
The pushed image then contains the annotation:
skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2 | jq '.annotations."seccomp-profile.kubernetes.cri-o.io"'
"quay.io/crio/seccomp:v2"
If I now use that image in a CRI-O test pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    - name: container
      image: quay.io/crio/nginx-seccomp:v2
      securityContext:
        seccompProfile:
          type: Unconfined
Then the CRI-O logs will indicate that the image annotation got evaluated and the profile got applied:
kubectl delete pod/pod
pod "pod" deleted
kubectl apply -f pod.yaml
pod/pod created
Misfits - Feat. ContainerSSH and Confidential Containers (You Choose!, Ch. 3, Ep. 10)
Misfits - Choose Your Own Adventure: The Treacherous Trek to Security In this episode, we'll go through security-related tools that ...
via YouTube https://www.youtube.com/watch?v=AKmDmhd5hpQ