GoogleCloudPlatform/opentelemetry-cloud-run
02-AREAS
SpeCrawler: Generating OpenAPI Specifications from API Documentation Using Large Language Models
In the digital era, the widespread use of APIs is evident. However, scalable utilization of APIs poses a challenge due to structure divergence observed in online API documentation. This underscores the need for automat…
GCP Reference — Cloud Custodian documentation
Getting started with shinytest2
Cursor Directory
Find the best cursor rules for your framework and language
LLM Text - Perfect Agent Context for any URL
PowerShell Automatic Variables: Special Variables Built into PowerShell
Learn about PowerShell's automatic variables - built-in special variables that serve specific purposes. Discover how to work with history limits, constants, exit codes, and null values.
Awesome Software Architecture
Curated list of awesome articles and resources to learn and practice about software architecture, patterns and principles.
Software Architecture Canvas: A Collaborative Way to Your Software Architecture
The Software Architecture Canvas is a collaborative technique for working out the software architecture of a software initiative. With this canvas, you can work on the software architecture of your software products efficiently, iteratively, and in a time-saving manner, as a team sport.
Architecture Principles: An approach to effective decision making in software architecture
Are you a software architect and often find it difficult to make architecture decisions in your team? This article shows you how to use architecture principles to make effective decisions in your team.
A declarative statement made with the intention of guiding architectural design decisions in order to achieve one or more qualities of a system.
If we take a closer look at this definition, we find several interesting parts.
"[...] intention of guiding architectural design decisions [...]"
As a software architect or a team of software engineers, you have to deal with and decide on many architecture issues.
But how do you decide these questions? Gut feeling? :-)
That's probably not the right approach.
As we learn from the Software Architecture Canvas, there are quality goals that are drivers of architecture.
What are the basic characteristics of good architecture principles?
Comprehensible & clear
Architectural principles should be like marketing slogans.
Testable
The principle should be verifiable: it should be possible to check whether work is done according to the principle and where exceptions are made.
Atomic
The principle requires no further context or knowledge to be understood.
In summary, architectural principles should be written to enable teams to make decisions: they're clear, provide decision support, and are atomic.
What are the pitfalls of creating architecture principles?
What do you think about the following principle 👇?
"All software should be written in a scalable manner."
A principle like this is too vague to be testable and offers no real decision support: nobody would argue against scalability, and it says nothing about the trade-offs involved.
Example 1: "Use cloud services if being locked in to a particular cloud provider is acceptable"
That's why we adopted the following architecture principle in one of our product teams.
"Use cloud services if being lock-in to a particular cloud provider is acceptable."
Whether this vendor lock-in is acceptable depends on several criteria:
The effort required to replace this managed service
An acceptable lead time for providing alternatives.
Let's take a look at an example technological decision we had to make in the past:
We needed to evaluate a centralised identity and access management solution for our SaaS products.
In addition to meeting the functional requirements, we had two powerful IAM solutions on the shortlist:
Keycloak (self-hosted)
Auth0 (Managed, cloud service)
Following the defined principle of "Use cloud services if being locked in to a particular cloud provider is acceptable", we concluded that a centralised IAM system should be self-managed rather than managed by a third-party provider: replacing a managed IAM product is a huge effort, so there is no reasonable lead time for deploying an alternative.
In summary, vendor lock-in wasn't acceptable to us in this case. So this principle efficiently guided us to the right decision.
Example 2: "Prefer standard data formats over third-party and custom formats"
The next principle was about the selection of protocols for service communication.
"Prefer standard data formats over third-party and custom formats"
If you have multiple services that need to communicate with each other, the question of protocol and format arises.
In the protocol ecosystem there is a fairly new kid on the block: gRPC
gRPC (gRPC Remote Procedure Calls) is a cross-platform, open-source, high-performance protocol for remote procedure calls. gRPC was originally developed by Google to connect a large number of microservices.
So in our team, the question is: RESTful HTTP vs. gRPC?
The selection of a protocol depends heavily on the quality goals and change scenarios of the services involved.
But if you can meet the quality goals and underlying requirements with both options, like RESTful HTTP vs. gRPC, then consider yourself lucky to have such a principle.
This principle helped us choose RESTful HTTP over gRPC: RESTful HTTP builds on widely accepted standards, while gRPC is more of a third-party format.
So here this principle sped up our decision-making, which doesn't mean that we don't rely on gRPC in certain cases.
Software architecture may be changing in the way it's practiced, but it's more important than ever.
TypeScript Transpiler Explained
Learn about the TypeScript transpiler, its benefits, setup, transpilation vs. compilation, controlling options, automation with watch mode, and more. Simplify your TypeScript coding experience.
Phind
Phind is a fast and intelligent AI answer engine. Focused on helping you solve challenging problems, Phind gets you from an idea to a working product.
REST API in R with plumber
API and R. Nowadays, it's pretty much expected that software comes with an HTTP API interface. Every programming language out there offers a way to expose APIs or make GET/POST/PUT requests, including R. In this post, I'll show you how to create an API using the plumber package. Plus, I'll give you tips on how to make it more production ready - I'll tackle scalability, statelessness, caching, and load balancing. You'll even see how to consume your API with other tools like Python, curl, and R's own httr package.
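The endpoint definitions themselves are not captured in these notes. As a rough illustration of what a plumber API file such as tmp/api_v1.R could contain (the file name is taken from the runner script below; the endpoint itself is a made-up example, not the post's actual API), a minimal route looks like this:

#* Echo back a message passed as a query parameter
#* @param msg The message to echo
#* @get /echo
function(msg = "") {
  list(msg = paste("The message is:", msg))
}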
# When an API is started it might take some time to initialize.
# This function stops the main execution and waits until
# the plumber API is ready to take queries.
wait_for_api <- function(log_path, timeout = 60, check_every = 1) {
  times <- timeout / check_every
  for (i in seq_len(times)) {
    Sys.sleep(check_every)
    if (any(grepl(readLines(log_path), pattern = "Running plumber API"))) {
      return(invisible())
    }
  }
  stop("Waiting for the API timed out!")
}
Oh, in some examples I am using redis. So, before you dive in, make sure to fire up a simple redis server. At the end of the script, I’ll be turning redis off, so you don’t want to be using it for anything else at the same time. I just want to remind you that this code isn’t meant to be run on a production server.
redis is launched in the background, so you might want to wait a little bit to make sure it's fully up and running before moving on.
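How the server is started is not shown in these notes; one simple way (an assumption about the setup, requiring redis-server to be installed locally) is to launch it detached and then poll it with the helper below:

# Hypothetical startup call, not from the original post.
# --daemonize yes tells redis-server to detach and keep running in the background.
system("redis-server --daemonize yes")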
# Poll redis with PING until it answers PONG or the timeout is reached.
wait_for_redis <- function(timeout = 60, check_every = 1) {
  times <- timeout / check_every
  for (i in seq_len(times)) {
    Sys.sleep(check_every)
    status <- suppressWarnings(
      system2("redis-cli", "PING", stdout = TRUE, stderr = TRUE) == "PONG"
    )
    if (status) {
      return(invisible())
    }
  }
  stop("Waiting for redis timed out!")
}
First off, let's talk about logging. I try to log as much as possible, especially in critical areas like database accesses and interactions with other systems. This way, if there's an issue in the future (and trust me, there will be), I should be able to diagnose the problem by looking at the logs alone. Logging is like "print debugging" (putting print("I am here"), print("I am here 2") everywhere), but done ahead of time. I always try to think about what information might be needed to make a correct diagnosis, so logging variable values is a must. The logger and glue packages are your best friends in that area.
Next, it might also be useful to add a unique request identifier (I am doing that in the setuuid filter) to be able to track it across the whole pipeline (since a single request might be passed across many functions). You might also want to add some other identifiers, such as MACHINE_ID - your API might be deployed on many machines, so it could be helpful for diagnosing whether a problem is associated with a specific instance or is a global issue.
In general you shouldn't worry too much about the size of the logs. Even if you generate ~10KB per request, it will take 100,000 requests to generate 1GB. And for a plumber API, 100,000 requests generated in a short time is A LOT. In such a scenario you should look into other languages. And if you have that many requests, you probably have a budget for storing those logs :)
It might also be a good idea to set up some automatic system to monitor those logs (e.g. Amazon CloudWatch if you are on AWS). In my example I would definitely monitor the "Error when reading key from cache" string. That would give me an indication of any ongoing problems with the API cache.
Speaking of cache, you might use it to save a lot of resources. Caching is a very broad topic with many pitfalls (what to cache, stale cache, etc.), so I won't spend too much time on it, but you might want to read at least a little bit about it. In my example, I am using the redis key-value store, which allows me to save the result for a given request, and if there is another request that asks for the same data, I can read it from redis much faster.
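The caching code itself is not included in these notes. A rough sketch of the idea, reusing redis-cli through system2() as in the helper above (the function name, string-only values, key handling, and lack of error handling are all simplifications, not the post's actual implementation):

# Look up a key in redis; on a miss, compute the value and store it.
get_with_cache <- function(key, compute) {
  cached <- suppressWarnings(
    system2("redis-cli", c("GET", key), stdout = TRUE, stderr = TRUE)
  )
  if (length(cached) == 1 && nzchar(cached)) {
    return(cached)  # cache hit: reuse the stored result
  }
  value <- compute()  # cache miss: do the expensive work
  system2("redis-cli", c("SET", key, shQuote(value)))  # keep it for the next request
  value
}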
Note that you could use the memoise package to achieve a similar thing using R only. However, redis might be useful when you are using multiple workers. Then, one cached request becomes available for all other R processes. But if you need to deploy just one process, memoise is fine, and it does not introduce another dependency - which is always a plus.
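For the single-process case, the memoise alternative mentioned above takes only a few lines (a minimal sketch; the slow function is just a stand-in):

library(memoise)

slow_lookup <- function(id) {
  Sys.sleep(2)  # stand-in for an expensive query or computation
  paste("result for", id)
}

fast_lookup <- memoise(slow_lookup)

fast_lookup("a")  # takes about 2 seconds and stores the result in memory
fast_lookup("a")  # returns instantly from the in-process cache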
info <- function(req, ...) {
  do.call(
    log_info,
    c(
      list("MachineId: {MACHINE_ID}, ReqId: {req$request_id}"),
      list(...),
      .sep = ", "
    ),
    envir = parent.frame(1)
  )
}
#* Log some information about the incoming request
#* https://www.rplumber.io/articles/routing-and-input.html - this is a must read!
#* @filter setuuid
function(req) {
  req$request_id <- UUIDgenerate(n = 1)
  plumber::forward()
}
#* Log some information about the incoming request
#* @filter logger
function(req) {
  if (!grepl(req$PATH_INFO, pattern = "PATH_INFO")) {
    info(
      req,
      "REQUEST_METHOD: {req$REQUEST_METHOD}",
      "PATH_INFO: {req$PATH_INFO}",
      "HTTP_USER_AGENT: {req$HTTP_USER_AGENT}",
      "REMOTE_ADDR: {req$REMOTE_ADDR}"
    )
  }
  plumber::forward()
}
To run the API in the background, one additional file is needed. Here I am creating it using a simple bash script.
library(plumber)
library(optparse)
library(uuid)
library(logger)

MACHINE_ID <- "MAIN_1"
PORT_NUMBER <- 8761

log_level(logger::TRACE)

pr("tmp/api_v1.R") %>%
  pr_run(port = PORT_NUMBER)
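The bash script itself is not captured here. As a rough equivalent (an assumption, not the post's actual script; it requires a Unix-like shell and supposes the runner above is saved as tmp/run_api.R), the API can be launched in the background from R, with wait_for_api() from above used to block until it is ready:

# Hypothetical launch: redirect plumber's startup messages to a log file
# so that wait_for_api() can watch for "Running plumber API".
system("Rscript tmp/run_api.R > tmp/api_v1.log 2>&1 &")
wait_for_api("tmp/api_v1.log")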
SPA Mode | Remix
From the beginning, Remix's opinion has always been that you own your server architecture. This is why Remix is built on top of the Web Fetch API and can run on any modern runtime via built-in or community-provided adapters. While we believe that having a server provides the best UX/Performance/SEO/etc. for most apps, it is also undeniable that there exist plenty of valid use cases for a Single Page Application in the real world:
SPA Mode is basically what you'd get if you had your own React Router + Vite setup using createBrowserRouter/RouterProvider, but along with some extra Remix goodies:
File-based routing (or config-based via routes())
Automatic route-based code-splitting via route.lazy
<Link prefetch> support to eagerly prefetch route modules
<head> management via Remix <Meta>/<Links> APIs
SPA Mode tells Remix that you do not plan on running a Remix server at runtime, that you wish to generate a static index.html file at build time, and that you will only use Client Data APIs for data loading and mutations.
The index.html is generated from the HydrateFallback component in your root.tsx route. The initial "render" to generate the index.html will not include any routes deeper than root. This ensures that the index.html file can be served/hydrated for paths beyond / (e.g., /about) if you configure your CDN/server to do so.
Using Powershell scripts to start and stop Windows Services - Davidson Sousa
If you have a slow computer and want to save some memory or processing when you are not coding, this is for you
Mastering Windows Services: Unleashing the Power of PowerShell for Streamlined Management and Automation
5 Essential Tips for Running Windows PowerShell as a Windows Service
jsonsystems/public
Contribute to jsonsystems/public development by creating an account on GitHub.
sourcemeta/awesome-jsonschema: A curated list of awesome JSON Schema resources, tutorials, tools, and more
A curated list of awesome JSON Schema resources, tutorials, tools, and more - sourcemeta/awesome-jsonschema
Taming My ADHD with Obsidian and PowerShell
Alleviating my ADHD headaches with Obsidian. Periodic Notes and Templater extensions save the day by reminding me of the next step towards my larger goals.
How to enhance your R shiny application with httpOnly Cookies
httpOnly Cookies are crucial for security, protecting against cross-site scripting attacks in R Shiny apps. Read more about them here.
Draft for adding OAuth support to shiny by thohan88 · Pull Request #518 · r-lib/httr2
Info: This is a draft for discussion purposes. It's not a polished PR and currently includes minimal error handling and documentation. It may be big enough to warrant a separate package, bu...
Roxygen R6 Guide
mlr3: Machine Learning in R - next generation. Contribute to mlr-org/mlr3 development by creating an account on GitHub.
shiny/R/history.R at main · rstudio/shiny
Shiny Query String and Hash Manipulation
shiny/R/graph.R at main · rstudio/shiny
Shiny reactlog visualizer and R6 class
Cloud Run WebSocket support now allows you to deploy an R Shiny Server as a serverless app to GCP Cloud Run
Cloud Run WebSocket support now allows you to deploy an R Shiny Server - a tool for hosting R Shiny dashboards - as a serverless app to GCP Cloud Run
Fastest Growing R Shiny App Store
Showcase your R shiny application and grow your user base. Real time usage stats and reviews for all your apps. Contribute today.
Why you should learn Javascript to master R Shiny. And how to get started - datahabits.io
Although the concealment of JavaScript is by design and makes Shiny easy to use at first, in the long run, when you want to build serious and more visually appealing apps, you will most likely need to use JavaScript to make the most of the web framework.
colearendt/tidyjson: Tidy your JSON data in R with tidyjson
Tidy your JSON data in R with tidyjson.
mgirlich/jsontools: Helpers to work with JSON in R
Helpers to work with JSON in R.
The R Graph Gallery – Help and inspiration for R charts
The R graph gallery displays hundreds of charts made with R, always providing the reproducible code.