1_r/devopsish

Npm overflowing with Tea spam, spills out from 70% of all new packages – research • DEVCLASS
Npm overflowing with Tea spam, spills out from 70% of all new packages – research • DEVCLASS

Npm overflowing with Tea spam, spills out from 70% of all new packages – research • DEVCLASS

Over the past six months more than 890,000 new packages (as opposed to updates for existing packages) were published on npm, of which between 613,000 and…

August 8, 2024 at 01:23PM

via Instapaper

·devclass.com·
Npm overflowing with Tea spam, spills out from 70% of all new packages – research • DEVCLASS
Backblaze Drive Stats for Q2 2024
Backblaze Drive Stats for Q2 2024
Read the Q2 2024 Drive Stats Report, with the latest on annualized failure rates and a look into measuring drive consistency over time.
·backblaze.com·
Backblaze Drive Stats for Q2 2024
IEEE report: When it comes to SSD and HDD tech roadmaps, money talks – Blocks and Files
IEEE report: When it comes to SSD and HDD tech roadmaps, money talks – Blocks and Files
An IEEE report sees no mass takeover of the disk drive market by SSDs because HDD cost/bit is decreasing fast enough to prevent SSDs catching up. The International Roadmap for Devices and Systems 2023 Update (IRDS) is discussed in Tom Coughlin’s August Digital Storage Technology Newsletter (subscription details here.) It covers many mass storage technologies: […]
·blocksandfiles.com·
IEEE report: When it comes to SSD and HDD tech roadmaps, money talks – Blocks and Files
Just Build Websites
Just Build Websites
Writing about the big beautiful mess that is making things for the world wide web.
·blog.jim-nielsen.com·
Just Build Websites
Criminal cyber attack behind McLaren Health Network IT collapse
Criminal cyber attack behind McLaren Health Network IT collapse
According to McLaren Northern Michigan's Facebook page, the disruption to their information technology and phone systems reported on Tuesday resulted from a criminal cyber attack.
·clickondetroit.com·
Criminal cyber attack behind McLaren Health Network IT collapse
Last Week in Kubernetes Development - Week Ending August 04 2024
Last Week in Kubernetes Development - Week Ending August 04 2024

Week Ending August 04, 2024

https://lwkd.info/2024/20240807

Developer News

The Steering Committee Election has begun! Now is the time to decide if you should be on Steering. If so, nominate yourself. Also, it’s time to check if you’re eligible to vote, since there’s plenty of time to fix that if the system doesn’t have you yet.

The Job Migration deadline is past and unmigrated jobs are being removed. The Infra team plans to migrate Prow around August 21st (after the 1.31 release), during which there will be a Prow outage. Infra recommends that Kubernetes subprojects not make any releases around the end of August.

The GitHub Admins are migrating all of our GitHub Orgs to be under a single Enterprise account. This is not expected to be disruptive, but they’ll do it one org at a time just in case.

SIG Node nominates Peter Hunt to be the third chair of the SIG.

Release Schedule

Next Deadline: v1.31.0 release day, August 13th

Kubernetes v1.31.0-rc.1 is now live! Find the full changelog with the release notes here. We’re all set for the release, which is scheduled for next Tuesday.

August patch releases, originally scheduled for August 13th, are now delayed by one day because of the v1.31 release scheduled for the same day. The patch releases are now scheduled to be cut on the 14th. There are no changes to the cherry pick deadline.

KEP of the Week

KEP 3762: PersistentVolume last phase transition time

This KEP introduces an enhancement to add a new PersistentVolumeStatus field, which would hold a timestamp of when a PersistentVolume last transitioned to a different phase.

Some users have experienced data loss with the Delete retain policy and switched to the safer Retain policy, where unclaimed volumes transition to the Released phase and accumulate over time, requiring manual cleanup. This will aid in manual cleanups, and allow performance tests to measure transition times (e.g., Pending to Bound).

This KEP is tracked for GA release in the upcoming v1.31.
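
As a rough illustration of the cleanup use case above, here is a minimal client-go sketch that flags Released volumes by their last phase transition time. The in-cluster config and the one-week threshold are choices made for this example, not part of the KEP:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside a cluster; use clientcmd with a
	// kubeconfig when running it from a workstation.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pvs, err := client.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	cutoff := time.Now().Add(-7 * 24 * time.Hour) // arbitrary one-week threshold
	for _, pv := range pvs.Items {
		// lastPhaseTransitionTime is the status field this KEP adds.
		t := pv.Status.LastPhaseTransitionTime
		if pv.Status.Phase == corev1.VolumeReleased && t != nil && t.Time.Before(cutoff) {
			fmt.Printf("%s: Released since %s, candidate for manual cleanup\n",
				pv.Name, t.Time.Format(time.RFC3339))
		}
	}
}
```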

Subprojects and Dependency Updates

CRI-O v1.30.4 reduced “Failed to get pid for pod infra container” NRI message

coredns v1.11.3 rewrite plugin can now rewrite response codes

csi-driver-host-path v1.14.1 Fix broken symbolic links in the deploy dir

grpc v1.65.4 Fix a bug in hpack error handling; also v1.58.3, v1.64.3, v1.63.2, v1.62.3, v1.61.3

cloud-provider-aws v1.30.3 Handle error while registering/deregistering target during; also v1.29.6, v1.28.9

via Last Week in Kubernetes Development https://lwkd.info/

August 07, 2024 at 02:30PM

·lwkd.info·
Last Week in Kubernetes Development - Week Ending August 04 2024
How the Google Antitrust Ruling May Influence Tech Competition
How the Google Antitrust Ruling May Influence Tech Competition

How the Google Antitrust Ruling May Influence Tech Competition

News Analysis: Nearly a quarter-century after Microsoft lost a similar case, a judge’s decision…

August 7, 2024 at 07:43AM

via Instapaper

·nytimes.com·
How the Google Antitrust Ruling May Influence Tech Competition
Spotlight on SIG API Machinery
Spotlight on SIG API Machinery

Spotlight on SIG API Machinery

https://kubernetes.io/blog/2024/08/07/sig-api-machinery-spotlight-2024/

We recently talked with Federico Bongiovanni (Google) and David Eads (Red Hat), Chairs of SIG API Machinery, to know a bit more about this Kubernetes Special Interest Group.

Introductions

Frederico (FSM): Hello, and thank you for your time. To start with, could you tell us about yourselves and how you got involved in Kubernetes?

David: I started working on OpenShift (the Red Hat distribution of Kubernetes) in the fall of 2014 and got involved pretty quickly in API Machinery. My first PRs were fixing kube-apiserver error messages and from there I branched out to kubectl (kubeconfigs are my fault!), auth (RBAC and *Review APIs are ports from OpenShift), apps (workqueues and sharedinformers for example). Don’t tell the others, but API Machinery is still my favorite :)

Federico: I was not as early in Kubernetes as David, but now it's been more than six years. At my previous company we were starting to use Kubernetes for our own products, and when I came across the opportunity to work directly with Kubernetes I left everything and boarded the ship (no pun intended). I joined Google and Kubernetes in early 2018, and have been involved since.

SIG API Machinery's scope

FSM: It only takes a quick look at the SIG API Machinery charter to see that it has quite a significant scope, nothing less than the Kubernetes control plane. Could you describe this scope in your own words?

David: We own the kube-apiserver and how to efficiently use it. On the backend, that includes its contract with backend storage and how it allows API schema evolution over time. On the frontend, that includes schema best practices, serialization, client patterns, and controller patterns on top of all of it.

Federico: Kubernetes has a lot of different components, but the control plane has a really critical mission: it's your communication layer with the cluster and also owns all the extensibility mechanisms that make Kubernetes so powerful. We can't make mistakes like a regression, or an incompatible change, because the blast radius is huge.

FSM: Given this breadth, how do you manage the different aspects of it?

Federico: We try to organize the large amount of work into smaller areas. The working groups and subprojects are part of it. Different people on the SIG have their own areas of expertise, and if everything fails, we are really lucky to have people like David, Joe, and Stefan who really are "all terrain", in a way that keeps impressing me even after all these years. But on the other hand this is the reason why we need more people to help us carry the quality and excellence of Kubernetes from release to release.

An evolving collaboration model

FSM: Was the existing model always like this, or did it evolve with time - and if so, what would you consider the main changes and the reason behind them?

David: API Machinery has evolved over time both growing and contracting in scope. When trying to satisfy client access patterns it’s very easy to add scope both in terms of features and applying them.

A good example of growing scope is the way that we identified a need to reduce memory utilization by clients writing controllers and developed shared informers. In developing shared informers and the controller patterns that use them (workqueues, error handling, and listers), we greatly reduced memory utilization and eliminated many expensive lists. The downside: we grew a new set of capabilities to support and effectively took ownership of that area from sig-apps.
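
For readers who haven't used the pattern, here is a minimal sketch of a shared informer with client-go; the resource (Pods), resync period, and handler body are arbitrary choices for illustration. The point is that event handlers and listers share one watch-backed in-memory cache, so reads stop generating expensive LIST calls against the apiserver:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One factory: every informer it hands out shares a single watch and cache.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	podInformer := factory.Core().V1().Pods()

	podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Println("pod added:", pod.Namespace+"/"+pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Listers read from the shared cache; no LIST request hits the apiserver.
	pods, err := podInformer.Lister().Pods("default").List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Println("pods in default (from cache):", len(pods))

	select {} // block forever; a real controller would wire in signal handling
}
```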

For an example of more shared ownership: building out cooperative resource management (the goal of server-side apply), kubectl expanded to take ownership of leveraging the server-side apply capability. The transition isn’t yet complete, but SIG CLI manages that usage and owns it.
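
To make "cooperative resource management" concrete, here is a hedged sketch of server-side apply from Go; the ConfigMap name, data, and field manager are invented for the example. Each manager declares only the fields it owns, the apiserver tracks ownership per field, and conflicting writes are rejected unless forced; `kubectl apply --server-side` drives the same mechanism:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1ac "k8s.io/client-go/applyconfigurations/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Declare only the fields this manager cares about; other managers
	// can own other fields of the same ConfigMap without clobbering these.
	cm := corev1ac.ConfigMap("demo-settings", "default").
		WithData(map[string]string{"logLevel": "debug"})

	applied, err := client.CoreV1().ConfigMaps("default").Apply(
		context.TODO(), cm,
		// Conflicts with fields owned by another manager are rejected
		// unless Force is set.
		metav1.ApplyOptions{FieldManager: "demo-controller"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("applied:", applied.Name)
}
```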

FSM: And for the boundary between approaches, do you have any guidelines?

David: I think much depends on the impact. If the impact is local in immediate effect, we advise other SIGs and let them move at their own pace. If the impact is global in immediate effect without a natural incentive, we’ve found a need to press for adoption directly.

FSM: Still on that note, SIG Architecture has an API Governance subproject: is it mostly independent from SIG API Machinery, or are there important connection points?

David: The projects have similar sounding names and carry some impacts on each other, but have different missions and scopes. API Machinery owns the how and API Governance owns the what. API conventions, the API approval process, and the final say on individual k8s.io APIs belong to API Governance. API Machinery owns the REST semantics and non-API specific behaviors.

Federico: I really like how David put it: "API Machinery owns the how and API Governance owns the what": we don't own the actual APIs, but the actual APIs live through us.

The challenges of Kubernetes popularity

FSM: With the growth in Kubernetes adoption we have certainly seen increased demands from the Control Plane: how is this felt and how does it influence the work of the SIG?

David: It’s had a massive influence on API Machinery. Over the years we have often responded to and many times enabled the evolutionary stages of Kubernetes. As the central orchestration hub of nearly all capability on Kubernetes clusters, we both lead and follow the community. In broad strokes I see a few evolution stages for API Machinery over the years, with constantly high activity.

Finding purpose: pre-1.0 up until v1.3 (up to our first 1000+ nodes/namespaces) or so. This time was characterized by rapid change. We went through five different versions of our schemas and rose to meet the need. We optimized for quick, in-tree API evolution (sometimes to the detriment of longer term goals), and defined patterns for the first time.

Scaling to meet the need: v1.3-1.9 (up to shared informers in controllers) or so. When we started trying to meet customer needs as we gained adoption, we found severe scale limitations in terms of CPU and memory. This was where we broadened API machinery to include access patterns, but were still heavily focused on in-tree types. We built the watch cache, protobuf serialization, and shared caches.

Fostering the ecosystem: v1.8-1.21 (up to CRD v1) or so. This was when we designed and wrote CRDs (the considered replacement for third-party-resources), the immediate needs we knew were coming (admission webhooks), and evolution to best practices we knew we needed (API schemas). This enabled an explosion of early adopters willing to work very carefully within the constraints to enable their use-cases for servicing pods. The adoption was very fast, sometimes outpacing our capability, and creating new problems.

Simplifying deployments: v1.22+. In the relatively recent past, we’ve been responding to the pressures of running kube clusters at scale with large numbers of sometimes-conflicting ecosystem projects using our extensions mechanisms. Lots of effort is now going into making platform extensions easier to write and safer to manage by people who don't hold PhDs in kubernetes. This started with things like server-side-apply and continues today with features like webhook match conditions and validating admission policies.

Work in API Machinery has a broad impact across the project and the ecosystem. It’s an exciting area to work in for those able to make a significant time investment on a long time horizon.

The road ahead

FSM: With those different evolutionary stages in mind, what would you pinpoint as the top priorities for the SIG at this time?

David: Reliability, efficiency, and capability in roughly that order.

With the increased usage of our kube-apiserver and extensions mechanisms, we find that our first set of extensions mechanisms, while fairly complete in terms of capability, carry significant risks in terms of potential mis-use with large blast radius. To mitigate these risks, we’re investing in features that reduce the blast radius for accidents (webhook match conditions) and which provide alternative mechanisms with lower risk profiles for most actions (validating admission policy).
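
As a hedged sketch of the lower-risk alternative David mentions, here is a ValidatingAdmissionPolicy expressed with the Go API types and printed as YAML; the policy name, resource match, and CEL rule are invented for illustration. Because the rule is a CEL expression evaluated inside the apiserver, there is no external webhook to fail or time out:

```go
package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	policy := admissionregistrationv1.ValidatingAdmissionPolicy{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "admissionregistration.k8s.io/v1",
			Kind:       "ValidatingAdmissionPolicy",
		},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-replica-limit"},
		Spec: admissionregistrationv1.ValidatingAdmissionPolicySpec{
			MatchConstraints: &admissionregistrationv1.MatchResources{
				ResourceRules: []admissionregistrationv1.NamedRuleWithOperations{{
					RuleWithOperations: admissionregistrationv1.RuleWithOperations{
						Operations: []admissionregistrationv1.OperationType{
							admissionregistrationv1.Create,
							admissionregistrationv1.Update,
						},
						Rule: admissionregistrationv1.Rule{
							APIGroups:   []string{"apps"},
							APIVersions: []string{"v1"},
							Resources:   []string{"deployments"},
						},
					},
				}},
			},
			// The CEL rule runs in-process in the apiserver: no webhook,
			// no network hop, and a much smaller blast radius on failure.
			Validations: []admissionregistrationv1.Validation{{
				Expression: "object.spec.replicas <= 5",
				Message:    "demo policy: replicas must be at most 5",
			}},
		},
	}

	out, err := yaml.Marshal(policy)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```

A ValidatingAdmissionPolicyBinding (omitted here) is still needed to put the policy into effect on a cluster.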

At the same time, the increased usage has made us more aware of scaling limitations that we can improve both server and client-side. Efforts here include more efficient serialization (CBOR), reduced etcd load (consistent reads from cache), and reduced peak memory usage (streaming lists).

And finally, the increased usage has highlighted some long-existing gaps that we’re closing: things like field selectors for CRDs, which the Batch Working Group is eager to leverage, and which will eventually form the basis for a new way to prevent trampoline pod attacks from exploited nodes.
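
For context on that gap: CRD field selectors let a CRD author declare which custom-resource fields may be used in --field-selector queries. Here is a minimal sketch with the apiextensions Go types; the version and JSON path are invented, and as of this writing the feature sits behind the CustomResourceFieldSelectors gate (beta in v1.31), so treat the details as assumptions:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// Fragment of a CRD version definition, not a complete CRD.
	version := apiextensionsv1.CustomResourceDefinitionVersion{
		Name:    "v1",
		Served:  true,
		Storage: true,
		// Fields listed here become usable as field selectors, e.g.
		//   kubectl get widgets --field-selector spec.color=blue
		SelectableFields: []apiextensionsv1.SelectableField{
			{JSONPath: ".spec.color"},
		},
	}
	fmt.Println("selectable:", version.SelectableFields[0].JSONPath)
}
```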

Joining the fun

FSM: For anyone wanting to start contributing, what are your suggestions?

Federico: SIG API Machinery is not an exception to the Kubernetes motto: Chop Wood and Carry Water. There are multiple weekly meetings that are open to everybody, and there is always more work to be done than people to do it.

I acknowledge that API Machinery is not easy, and the ramp-up will be steep. The bar is high, because of the reasons we've been discussing: we carry a huge responsibility. But of course, with passion and perseverance many people have ramped up through the years, and we hope more will come.

In terms of concrete opportunities, there is the SIG meeting every two weeks. Everyone is welcome to attend and listen, see what the group talks about, see what's going on in this release, etc.

Also two times a week, Tuesday and Thursday, we have the public Bug Triage, where we go through everything new from the last meeting. We've been keeping this practice for more than 7 years now. It's a great opportunity to volunteer to review code, fix bugs, improve documentation, etc. Tuesdays it's at 1 PM (PST) and Thursdays at an EMEA-friendly time (9:30 AM PST). We are always looking to improve…

·kubernetes.io·
Spotlight on SIG API Machinery
Blog: Spotlight on SIG API Machinery
Blog: Spotlight on SIG API Machinery

Blog: Spotlight on SIG API Machinery

https://www.kubernetes.dev/blog/2024/08/07/sig-api-machinery-spotlight-2024/

We recently talked with Federico Bongiovanni (Google) and David Eads (Red Hat), Chairs of SIG API Machinery, to know a bit more about this Kubernetes Special Interest Group.

·kubernetes.dev·
Blog: Spotlight on SIG API Machinery
DevOps Toolkit - 10 CLIs I Can Not Live Without! - https://www.youtube.com/watch?v=7ItANF7eytU
DevOps Toolkit - 10 CLIs I Can Not Live Without! - https://www.youtube.com/watch?v=7ItANF7eytU

10 CLIs I Can Not Live Without!

Discover the top 10 CLI tools that will improve your productivity! From eza for enhanced file listing to bat for syntax-highlighted file viewing, and fzf for fuzzy file searching, these tools are game-changers. Learn how zoxide makes directory navigation a breeze, and how The Fuck corrects command errors effortlessly. Explore jq and yq for JSON and YAML processing, Teller for managing secrets, and the powerful GitHub CLI. Finally, see how Devbox creates isolated environments for seamless development.

#CLItools #ProductivityHacks #CommandLine #DevTools

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Transcript and commands: https://devopstoolkit.live/terminal/transform-your-terminal-3-must-have-zsh-plugins
🔗 eza: https://github.com/eza-community/eza
🔗 bat: https://github.com/sharkdp/bat
🔗 fzf: https://github.com/junegunn/fzf
🔗 zoxide: https://github.com/ajeetdsouza/zoxide
🔗 The Fuck: https://github.com/nvbn/thefuck
🔗 jq: https://jqlang.github.io/jq
🔗 yq: https://github.com/mikefarah/yq
🔗 Teller: https://github.com/tellerops/teller
🔗 GitHub CLI: https://cli.github.com
🔗 Devbox: https://www.jetify.com/devbox

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please use https://calendar.app.google/Q9eaDUHN8ibWBaA7A to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ Twitter: https://twitter.com/vfarcic ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Top CLIs: Introduction
00:32 eza: ls Replacement
02:53 bat: cat With Syntax Highlighting
04:10 fzf: Command-Line Fuzzy Finder
05:49 zoxide: Smarter cd Command
07:58 The Fuck: Error Corrections
09:04 jq: sed for JSON
10:05 yq: Like jq But For YAML
11:00 Teller: Universal Secrets Manager
13:11 GitHub CLI (gh): GitHub To Your Terminal
14:28 Devbox: Isolated Shells

via YouTube https://www.youtube.com/watch?v=7ItANF7eytU

·youtube.com·
DevOps Toolkit - 10 CLIs I Can Not Live Without! - https://www.youtube.com/watch?v=7ItANF7eytU
Bash-Oneliner
Bash-Oneliner
A collection of handy Bash One-Liners and terminal tricks for data processing and Linux system maintenance.
·onceupon.github.io·
Bash-Oneliner
sequinstream/sequin
sequinstream/sequin
An open source message stream built on Postgres.
·github.com·
sequinstream/sequin
Diving into OCI Image and Distribution 1.1 Support in Amazon ECR | Amazon Web Services
Diving into OCI Image and Distribution 1.1 Support in Amazon ECR | Amazon Web Services
AWS recently announced that Amazon Elastic Container Registry (Amazon ECR) now supports version 1.1 of the Open Container Initiative (OCI) Image and Distribution specifications. This latest version includes support for image referrers, as well as significant enhancements for distribution of non-image artifacts. We are excited about this set of new capabilities, which helps customers more […]
·aws.amazon.com·
Diving into OCI Image and Distribution 1.1 Support in Amazon ECR | Amazon Web Services
ASUS NUC 13 Rugged Short Review A Fanless Intel N50 System
ASUS NUC 13 Rugged Short Review A Fanless Intel N50 System

ASUS NUC 13 Rugged Short Review A Fanless Intel N50 System

The ASUS NUC 13 Rugged Short has a long name, but it also has a lot going for it as a low-power fanless platform from a vendor most…

August 3, 2024 at 05:10PM

via Instapaper

·servethehome.com·
ASUS NUC 13 Rugged Short Review A Fanless Intel N50 System
POSSE
POSSE
POSSE is an abbreviation for Publish (on your) Own Site, Syndicate Elsewhere, the practice of posting content on your own site first, then publishing copies or sharing links to third parties (like social media silos) with original post links to provide viewers a path to directly interacting with your content.
·indieweb.org·
POSSE
Shortcut: Dispatch Bulletin
Shortcut: Dispatch Bulletin
In which I describe a Shortcut I built to publish small notes and photos to my Jekyll microblog bulletin.sherif.io with just a few taps on iOS.
·sherif.io·
Shortcut: Dispatch Bulletin
US court blocks Biden administration net neutrality rules
US court blocks Biden administration net neutrality rules
A U.S. appeals court on Thursday blocked the Federal Communications Commission's reinstatement of landmark net neutrality rules, saying broadband providers are likely to succeed in a legal challenge.
·reuters.com·
US court blocks Biden administration net neutrality rules
Feds put Nvidia Run.ai deal under antitrust scrutiny
Feds put Nvidia Run.ai deal under antitrust scrutiny
The investigation adds to the global pressure by regulators to prevent control of artificial intelligence by just a handful of the world’s largest technology companies.
·politico.com·
Feds put Nvidia Run.ai deal under antitrust scrutiny