Found 188 bookmarks
Design Patterns in R
Build robust and maintainable software with object-oriented design patterns in R. Design patterns distill the experience of many software designers and architects, gathered over many years of solving similar problems, into neat, well-defined components and interfaces. These solutions have withstood the test of time with respect to re-usability, flexibility, and maintainability. R6P provides abstract base classes with examples for a few known design patterns, selected for their applicability to analytic projects in R. Using these patterns in R projects has proven effective in dealing with the complexity that data-driven applications possess.
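To make the idea concrete, here is a minimal sketch of one such pattern (a Singleton) written in plain R6. This is an illustration of the pattern only; it does not use R6P's actual base classes, and all names are invented:

library(R6)

Counter <- R6Class("Counter",
  public = list(
    count = 0,
    add = function() {
      self$count <- self$count + 1
      invisible(self)
    }
  )
)

# Return the single shared instance, creating it on first use
.counter_instance <- NULL
counter <- function() {
  if (is.null(.counter_instance)) .counter_instance <<- Counter$new()
  .counter_instance
}

counter()$add()
counter()$count  # 1 - both calls touched the same object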
·tidylab.github.io·
Using References
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Using References in OpenAPI Descriptions OpenAPI Referencing is a powerful tool. It allows managing document size and complexity, and allows re-using shared components while avoiding copy-paste or change-management errors. However, the history of referencing and the "$ref" keyword is complex, leading to slightly different behavior depending on the version of the OpenAPI Specification (OAS) that you are using, and on where in your OpenAPI Description (OAD) the reference occurs. There are also other "$ref"-like keywords ("operationRef", "mapping") and behaviors (referencing by component name or operation ID) that are related but somewhat different. Referencing can also be challenging to use due to incomplete and inconsistent support across different tools, some of which require references to be pre-processed before they can read the OAD. The resources in this section explain how to use referencing, and what to look for when assessing the referencing support in your OpenAPI tools.
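As a brief illustration of the core mechanism (the Pet schema and /pets path below are invented for this sketch), an OAS 3.x document can define a schema once under components and point to it with "$ref":

paths:
  /pets:
    get:
      responses:
        "200":
          description: A list of pets.
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Pet"
components:
  schemas:
    Pet:
      type: object
      properties:
        name:
          type: string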
The Future of References There are plans to reduce the complexity around references in future OpenAPI Specifications. The Moonwalk project is considering a different approach that imports complete documents, associates them with namespaces, and only supports referencing by component name (not "$ref"). A small example can be seen in the Moonwalk deployments proposal, and there are discussions around an initial draft proposal for imports and a few ideas on how to manage interactions with JSON Schema referencing. The proposed Workflows Specification is already using a "sourceDescription" field that is not unlike the Moonwalk proposal.
·learn.openapis.org·
Best Practices
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Keep a Single Source of Truth Regardless of your design approach (design-first or code-first), always keep a single source of truth, i.e., information should not be duplicated in different places. It is really the same concept used in programming, where repeated code should be moved to a common function.
Otherwise, eventually one of the places will be updated while the other won’t, leading to headaches… in the best of cases. For instance, it is commonplace to use code annotations to generate an OpenAPI description and then commit the latter to source control while the former still lingers in the code. As a result, newcomers to the project will not know which one is actually in use, and mistakes will be made. Alternatively, you can use a Continuous Integration test to ensure that the two sources stay consistent.
Add OpenAPI Descriptions to Source Control OpenAPI Descriptions are not just a documentation artifact: they are first-class source files which can drive a great number of automated processes, including boilerplate generation, unit testing and documentation rendering. As such, OADs should be committed to source control, and, in fact, they should be among the first files to be committed. From there, they should also participate in Continuous Integration processes.
Make the OpenAPI Descriptions Available to the Users Beautifully-rendered documentation can be very useful for the users of an API, but sometimes they might want to access the source OAD, for instance to use tools to generate client code or to build automated bindings for some language. Therefore, making the OAD available to the users is an added bonus for them. The documents that make up the OAD can even be made available through the same API to allow runtime discovery.
There is Seldom Need to Write OpenAPI Descriptions by Hand Since OADs are plain-text documents in an easy-to-read format (be it JSON or YAML), API designers are usually tempted to write them by hand. While there is nothing stopping you from doing this, and, in fact, hand-written API descriptions are usually the most terse and efficient, approaching any big project this way is highly impractical. Instead, you should try the other existing creation methods and choose the one that best suits you and your team (no YAML or JSON knowledge needed!):
OpenAPI Editors: Be it text editors or GUI editors, they usually take care of repetitive tasks, allow you to keep a library of reusable components, and provide a real-time preview of the generated documentation.
Domain-Specific Languages: As the name indicates, DSLs are API description languages tailored to specific development fields. A tool is then used to produce the OpenAPI Description. A new language has to be learned, but, in return, extremely concise descriptions can be achieved.
Code Annotations: Most programming languages allow you to annotate the code, be it with specific syntax or with general code comments. These annotations can be used, for example, to extend a method signature with information regarding the API endpoint and HTTP method that lead to it. A tool can then parse the code annotations and generate OADs automatically. This method fits very nicely with the code-first approach, so keep in mind the advice on using a Design-First Approach (described below) when using it…
A Mix of All the Above: It’s perfectly possible to create the bulk of an OpenAPI Description using an editor or DSL and then hand-tune the resulting file. Just be aware of the advice above (Keep a Single Source of Truth): once you modify a file it becomes the source of truth and the previous one should be discarded (maybe keep it as a backup, but out of the sight and reach of children and newcomers to the project).
Describing Large APIs
Do not repeat yourself (the DRY principle). If the same piece of YAML or JSON appears more than once in the document, it’s time to move it to the components section and reference it from other places using $ref (see Reusing Descriptions); not only will the resulting document be smaller, but it will also be much easier to maintain. Components can be referenced from other documents, so you can even reuse them across different API descriptions!
Split the description into several documents: Smaller files are easier to navigate, but too many of them are equally taxing. The key lies somewhere in the middle. A good rule of thumb is to use the natural hierarchy present in URLs to build your directory structure; for example, put all routes starting with /users (like /users and /users/{id}) in the same file (think of it as a “sub-API”), as in the sketch after this list. Bear in mind that some tools might have issues with large files, whereas others might not handle too many files gracefully. The solution will have to take your toolkit into account.
Use tags to keep things organized: Tags have not been described in the Specification chapter, but they can help you arrange your operations and find them faster. A tag is simply a piece of metadata (a unique name and an optional description) that you can attach to operations. Tools, especially GUI editors, can then sort all your API’s operations by their tags to help you keep them organized.
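For instance, here is a sketch of one possible on-disk layout that follows the URL hierarchy (all file names are invented for illustration):

openapi.yaml          # root document: info, servers, tags
paths/
  users.yaml          # operations for /users and /users/{id}
  orders.yaml         # operations for /orders and /orders/{id}
components/
  schemas.yaml        # shared schemas, referenced from the path files via $ref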
Links to External Best Practices There’s quite a bit of literature about how to organize your API more efficiently. Make sure you check out how other people solved the same issues you are facing now! For example, the API Stylebook contains internal API Design Guidelines shared with the community by some well-known companies and government agencies.
Best Practices This page contains general pieces of advice which do not strictly belong to the Specification Explained chapter because they are not directly tied to the OpenAPI Specification (OAS). However, they greatly simplify creating and maintaining OpenAPI Descriptions (OADs), so they are worth keeping in mind.
Use a Design-First Approach Traditionally, two main approaches exist when creating OADs: Code-first and Design-first. In the Code-first approach, the API is first implemented in code, and then its description is created from it, using code comments, code annotations, or simply written from scratch. This approach does not require developers to learn another language, so it is usually regarded as the easiest one. Conversely, in Design-first, the API description is written first and then the code follows. The most obvious advantages are that the code already has a skeleton upon which to build, and that some tools can provide boilerplate code automatically. There have been a number of heated debates over the relative merits of these two approaches but, in the opinion of the OpenAPI Initiative (OAI), the importance of using Design-first cannot be stressed strongly enough.
The reason is simple: the number of APIs that can be created in code is far larger than what can be described in OpenAPI. To emphasize: OpenAPI is not capable of describing every possible HTTP API; it has limitations. Therefore, unless these descriptive limitations are perfectly known and taken into account when coding the API, they will rear their ugly head later on when trying to create an OpenAPI description for it. At that point, the right fix will be to change the code so that it uses an API which can actually be described with OpenAPI (or to switch to Design-first altogether). Sometimes, however, since it is late in the process, it will be preferred to twist the API description so that it more or less matches the actual API. It goes without saying that this leads to unintuitive and incomplete descriptions that will rarely scale in the future. Finally, there exist a number of validation tools that can verify that the implemented code adheres to the OpenAPI description. Running these tools as part of a Continuous Integration process allows changing the OpenAPI Description with peace of mind, since deviations in the code behavior will be promptly detected. Bottom line: OpenAPI opens the door to a wealth of automated tools. Make sure you use them!
·learn.openapis.org·
Fast JSON, NDJSON and GeoJSON Parser and Generator
A fast JSON parser, generator and validator which converts JSON, NDJSON (Newline Delimited JSON) and GeoJSON (Geographic JSON) data to/from R objects. The standard R data types are supported (e.g. logical, numeric, integer) with configurable handling of NULL and NA values. Data frames, atomic vectors and lists are all supported as data containers translated to/from JSON. GeoJSON data is read in as simple features objects. This implementation wraps the yyjson C library.
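A minimal sketch of a round trip, assuming the package is installed as yyjsonr and exposes read_json_str()/write_json_str() (check the package reference for the exact entry points):

library(yyjsonr)

# data.frame -> JSON string -> data.frame
json <- write_json_str(head(iris, 2))
df   <- read_json_str(json)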
·coolbutuseless.github.io·
How Do We Get the Entrata Integration Set Up Properly and Give Repli Access?
The Entrata integration connects Repli to Entrata, a property management software platform that Repli utilizes to assist in reporting, accessing real-time data, and more.
Property Units - The endpoints below allow us to retrieve all property unit information and sync it to our system for real-time updates.
getAmenities, getMitsPropertyUnits, getPropertyUnits, getSpecials, getUnitsAvailabilityAndPricing, getUnitTypes
Properties - The endpoints below allow us to retrieve all property information and sync it to our system for real-time updates.
getAmenityReservations, getCalendarAvailability, getFloorPlans, getPetTypes, getProperties, getPropertyAddOns, getPropertyAnnouncements, getPropertyPickList, getRentableItems, getReservableAmenities, getWebSites
·support.repli360.com·
SpeCrawler: Generating OpenAPI Specifications from API Documentation Using Large Language Models
In the digital era, the widespread use of APIs is evident. However, scalable utilization of APIs poses a challenge due to structure divergence observed in online API documentation. This underscores the need for automat…
·ar5iv.labs.arxiv.org·
REST API in R with plumber
API and R Nowadays, it’s pretty much expected that software comes with an HTTP API interface. Every programming language out there offers a way to expose APIs or make GET/POST/PUT requests, including R. In this post, I’ll show you how to create an API using the plumber package. Plus, I’ll give you tips on how to make it more production-ready - I’ll tackle scalability, statelessness, caching, and load balancing. You’ll even see how to consume your API with other tools like Python, curl, and R’s own httr package.
# When an API is started it might take some time to initialize.
# This function stops the main execution and waits until
# the plumber API is ready to take queries.
wait_for_api <- function(log_path, timeout = 60, check_every = 1) {
  times <- timeout / check_every
  for (i in seq_len(times)) {
    Sys.sleep(check_every)
    if (any(grepl(readLines(log_path), pattern = "Running plumber API"))) {
      return(invisible())
    }
  }
  stop("Waiting for the plumber API timed out!")
}
Oh, in some examples I am using redis. So, before you dive in, make sure to fire up a simple redis server. At the end of the script, I’ll be turning redis off, so you don’t want to be using it for anything else at the same time. I just want to remind you that this code isn’t meant to be run on a production server.
redis is launched in the background, so you might want to wait a little bit to make sure it’s fully up and running before moving on.
wait_for_redis <- function(timeout = 60, check_every = 1) {
  times <- timeout / check_every
  for (i in seq_len(times)) {
    Sys.sleep(check_every)
    # redis-cli replies "PONG" once the server is up
    status <- suppressWarnings(
      system2("redis-cli", "PING", stdout = TRUE, stderr = TRUE) == "PONG"
    )
    if (status) {
      return(invisible())
    }
  }
  stop("Waiting for redis timed out!")
}
First off, let’s talk about logging. I try to log as much as possible, especially in critical areas like database accesses and interactions with other systems. This way, if there’s an issue in the future (and trust me, there will be), I should be able to diagnose the problem by looking at the logs alone. Logging is like “print debugging” (putting print(“I am here”), print(“I am here 2”) everywhere), but done ahead of time. I always try to think about what information might be needed to make a correct diagnosis, so logging variable values is a must. The logger and glue packages are your best friends in that area.
Next, it might also be useful to add a unique request identifier (I am doing that in the setuuid filter) to be able to track a request across the whole pipeline (since a single request might be passed across many functions). You might also want to add some other identifiers, such as MACHINE_ID - your API might be deployed on many machines, so it could be helpful to diagnose whether a problem is associated with a specific instance or is a global issue.
In general, you shouldn’t worry too much about the size of the logs. Even if you generate ~10KB per request, it will take 100,000 requests to generate 1GB. And for a plumber API, 100,000 requests generated in a short time is A LOT - in such a scenario you should look into other languages. And if you have that many requests, you probably have a budget for storing those logs :)
It might also be a good idea to set up some automatic system to monitor those logs (e.g. Amazon CloudWatch if you are on AWS). In my example, I would definitely monitor the “Error when reading key from cache” string, which would give me an indication of any ongoing problems with the API cache.
Speaking of the cache, you might use it to save a lot of resources. Caching is a very broad topic with many pitfalls (what to cache, stale caches, etc.), so I won’t spend too much time on it, but you might want to read at least a little bit about it. In my example, I am using the redis key-value store, which allows me to save the result for a given request; if another request asks for the same data, I can read it from redis much faster.
Note that you could use the memoise package to achieve something similar using R only. However, redis might be useful when you are using multiple workers: one cached request then becomes available to all other R processes. But if you need to deploy just one process, memoise is fine, and it does not introduce another dependency - which is always a plus.
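A minimal sketch of that single-process alternative with memoise (the slow_lookup function is invented for illustration):

library(memoise)

slow_lookup <- function(id) {
  Sys.sleep(1)  # stand-in for an expensive computation or query
  id^2
}

fast_lookup <- memoise(slow_lookup)
fast_lookup(3)  # takes ~1s the first time
fast_lookup(3)  # returned instantly from the in-process cache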
# Helper that prefixes every log entry with machine and request identifiers
info <- function(req, ...) {
  do.call(
    log_info,
    c(
      list("MachineId: {MACHINE_ID}, ReqId: {req$request_id}"),
      list(...),
      .sep = ", "
    ),
    envir = parent.frame(1)
  )
}
#* Attach a unique identifier to the incoming request
#* https://www.rplumber.io/articles/routing-and-input.html - this is a must read!
#* @filter setuuid
function(req) {
  req$request_id <- UUIDgenerate(n = 1)
  plumber::forward()
}
#* Log some information about the incoming request
#* @filter logger
function(req) {
  if (!grepl(req$PATH_INFO, pattern = "PATH_INFO")) {
    info(
      req,
      "REQUEST_METHOD: {req$REQUEST_METHOD}",
      "PATH_INFO: {req$PATH_INFO}",
      "HTTP_USER_AGENT: {req$HTTP_USER_AGENT}",
      "REMOTE_ADDR: {req$REMOTE_ADDR}"
    )
  }
  plumber::forward()
}
To run the API in the background, one additional file is needed. Here I am creating it using a simple bash script.
library(plumber)
library(optparse)
library(uuid)
library(logger)

MACHINE_ID <- "MAIN_1"
PORT_NUMBER <- 8761

log_level(logger::TRACE)

pr("tmp/api_v1.R") %>%
  pr_run(port = PORT_NUMBER)
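Once the API is running, consuming it from R is a one-liner with httr. A hedged sketch (the /echo endpoint and msg parameter are hypothetical; substitute a route your API actually defines):

library(httr)

resp <- GET(
  "http://127.0.0.1:8761/echo",
  query = list(msg = "hello")
)
status_code(resp)
content(resp)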
·zstat.pl·
Giving Opportunities at Saint Jude
Link to Shane Matthew Connolly Memorial Scholarship Fund Description on St Jude website
Shane Matthew Connolly Memorial Scholarship Fund This scholarship was established in loving memory of Shane Connolly in 2021 by his parents, Patrick and Annie Connolly, and his sisters, Lauren and Maeve ('03). Shane was a former student at Saint Jude and a graduate of the Marist School. The scholarship is awarded to a rising 8th grader, boy or girl, who demonstrates outstanding achievement both in the classroom and on the playing field, and who does so while emulating Shane's humility, unfailingly even-tempered sportsmanship, and consistently exhibited concern for and kindness to others.
·saintjude.net·
tree.nathanfriend.io
An online tree-like utility for generating ASCII folder structure diagrams. Written in TypeScript and React.
·tree.nathanfriend.io·
Finding Undocumented APIs
Introduction, case studies, and exercises for finding and using undocumented APIs hidden in plain sight.
·inspectelement.org·
Shiny
Shiny is a package that makes it easy to create interactive web apps using R and Python.
Shiny was designed with an emphasis on distinct input and output components in the UI. Inputs send values from the client to the server, and when the server has values for the client to display, they are received and rendered by outputs. However, this model does not cover every case; sometimes:
You want the server to trigger logic on the client that doesn’t naturally relate to any single output.
You want the server to update a specific (custom) output on the client, but not by totally invalidating the output and replacing its value - just by making a targeted modification.
You have some client JavaScript that isn’t related to any particular input, yet wants to trigger some behavior in R. For example, binding keyboard shortcuts on the web page to R functions on the server, or alerting R when the size of the browser window has changed.
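For the first of these cases, a minimal sketch using Shiny's custom message mechanism (the 'flash' message type and the alert are invented for illustration):

library(shiny)

ui <- fluidPage(
  # Client-side handler: runs whenever the server sends a 'flash' message
  tags$script(HTML(
    "Shiny.addCustomMessageHandler('flash', function(msg) { alert(msg); });"
  )),
  actionButton("go", "Go")
)

server <- function(input, output, session) {
  observeEvent(input$go, {
    # Trigger client-side logic that is not tied to any output
    session$sendCustomMessage("flash", "Hello from the server!")
  })
}

shinyApp(ui, server)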
·shiny.posit.co·
Rectangling
Rectangling is the art and craft of taking a deeply nested list (often sourced from wild caught JSON or XML) and taming it into a tidy data set of rows and columns. This vignette introduces you to the main rectangling tools provided by tidyr: `unnest_longer()`, `unnest_wider()`, and `hoist()`.
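A minimal sketch of these tools on a small hand-made list-column (the repos data is invented):

library(tidyr)
library(tibble)

repos <- tibble(
  repo = list(
    list(name = "one", owner = list(login = "alice"), topics = list("r", "json")),
    list(name = "two", owner = list(login = "bob"), topics = list("api"))
  )
)

# Spread each top-level field of the list-column into its own column
unnest_wider(repos, repo)

# Or pluck specific nested values directly, leaving the rest behind
hoist(repos, repo, owner_login = c("owner", "login"))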
·tidyr.tidyverse.org·
hendrikvanb
Working with complex, hierarchically nested JSON data in R can be a bit of a pain. In this post, I illustrate how you can convert JSON data into tidy tibbles with particular emphasis on what I’ve found to be a reasonably good, general approach for converting nested JSON into nested tibbles. I use three illustrative examples of increasing complexity to help highlight some pitfalls and build up the logic underlying the approach before applying it in the context of some real-world rock climbing competition data.
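In that spirit, a minimal sketch of one common route (the sample JSON is invented): parse to a nested list, wrap it in a list-column, then rectangle with tidyr:

library(jsonlite)
library(tibble)
library(tidyr)

json <- '[{"name":"Ada","langs":["R","C"]},{"name":"Grace","langs":["COBOL"]}]'

# simplifyVector = FALSE keeps the full nested structure as lists
parsed <- fromJSON(json, simplifyVector = FALSE)

tibble(person = parsed) |>
  unnest_wider(person) |>  # one column per top-level field
  unnest_longer(langs)     # one row per language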
·hendrikvanb.gitlab.io·
JSON files & tidy data | The Byrd Lab
My lab investigates how blood pressure can be treated more effectively. Much of that work involves the painstaking development of new concepts and research methods to move forward the state of the art. For example, our work on urinary extracellular vesicles’ mRNA as an ex vivo assay of the ligand-activated transcription factor activity of mineralocorticoid receptors is challenging, fun, and rewarding. With a lot of work from Andrea Berrido and Pradeep Gunasekaran in my lab, we have been moving the ball forward on several key projects on that front.
·byrdlab.org·