Development

1810 bookmarks
UNCHARTED DATA: Using Crosstalk to Add User-Interactivity
Linking an interactive plot and table together with the crosstalk package.
Using Crosstalk to Add User-Interactivity
The goal is to link the reactable table I created to a plotly chart and provide additional filter options that control both the table and the chart.
An important note: in order to use crosstalk, you must create a shared dataset and call that dataset within both plotly and reactable; otherwise, the two widgets will not communicate and filter each other. The code to do this is SharedData$new(dataset).
If you expand the code below, you’ll see that the code to build a table in reactable is quite extensive. I will not go into the details in this post, but I do recommend a couple of great tutorials that I used while creating the interactive table: this tutorial from Greg Lin, and this one from Tom Mock, which really helped me understand how to use CSS and Google fonts to enhance the visual appeal of the table (see the “Additional CSS Used for Table” section below for more info).
If you have ever built something in Shiny before, you’ll notice that the crosstalk filters are very similar. You can add a filter to any existing column in the dataset. As you can see in the code below, I used a mixture of filter_checkbox and filter_select, depending on how many unique options were available in the column being filtered. My rule of thumb: if there are more than five options to choose from, it’s probably better to put them into a list with filter_select, as I did with the Division filter, so as not to take up too much space on the page.
For the layout of the data visualization, I used bscols to place the crosstalk filters side-by-side with the interactive plotly chart. I then placed the reactable table underneath and added a legend to the table using tags from the htmltools package. The final result is shown below. Feel free to click around the filters, and you will notice that both the plot and the table filter accordingly. Another option is to click and drag on the plot, and the table underneath will mimic the teams shown.
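The post’s full code is collapsed on the original page; here is a minimal sketch of the wiring, assuming a hypothetical teams data frame with Conference, Division, Wins, and Losses columns (the column names and filter IDs are illustrative, not the post’s):

library(crosstalk)
library(plotly)
library(reactable)

# Wrap the data once; every widget built from this object stays linked.
shared_teams <- SharedData$new(teams)

# Filters sit beside the plotly chart; widths use Bootstrap's 12-column grid.
bscols(
  widths = c(3, 9),
  list(
    filter_checkbox("conference", "Conference", shared_teams, ~Conference),
    filter_select("division", "Division", shared_teams, ~Division)
  ),
  plot_ly(shared_teams, x = ~Wins, y = ~Losses,
          type = "scatter", mode = "markers")
)

# The table receives the same shared object, so it filters in sync.
reactable(shared_teams)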
·uncharteddata.netlify.app·
Design Patterns in R
Build robust and maintainable software with object-oriented design patterns in R. Design patterns abstract the experience of many software designers and architects, gathered over many years of solving similar problems, and present it in neat, well-defined components and interfaces. These are solutions that have withstood the test of time with respect to re-usability, flexibility, and maintainability. R6P provides abstract base classes with examples for a few known design patterns. The patterns were selected by their applicability to analytic projects in R. Using these patterns in R projects has proven effective in dealing with the complexity that data-driven applications possess.
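As a flavor of the general idea, here is a minimal sketch of one such pattern, the Singleton, written with plain R6; it only illustrates the pattern and is not R6P’s actual interface:

library(R6)

Config <- R6Class("Config", public = list(settings = list()))

# Accessor that always returns the same instance, creating it on first use.
get_config <- local({
  instance <- NULL
  function() {
    if (is.null(instance)) instance <<- Config$new()
    instance
  }
})

a <- get_config()
b <- get_config()
identical(a, b)  # TRUE: both names refer to the single shared object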
·tidylab.github.io·
The most efficient way to manage snapshot tests in R.
Use CI and the GitHub API
Snapshot testing gets difficult when there is more than one variant of the same result. Snapshot testing can be discouraging because snapshots will most likely fail due to environment differences: if one person runs the tests on a Mac and another on a Linux machine, the snapshots of rendered images will almost certainly differ. Comparing these snapshots will result in a failed test even though the code is correct. Add CI to the mix, and you have a hot mess.
The easiest solution is to introduce variants. Variants are versions of snapshots that were created in different environments. In {testthat}, variants are stored in separate directories. You pass the name of the variant to the variant argument of testthat::expect_snapshot. If you are on Linux, set variant = "linux"; if you are on a Mac, set variant = "mac".
Use snapshots generated on CI as the source of truth. Don’t check in snapshots generated on your machine. Generate them on CI and download them to your machine instead.
Step 1: Archive snapshots on CI
Add this step to your CI testing workflow to allow downloading the generated snapshots.
- name: Archive test snapshots
  if: always()
  uses: actions/upload-artifact@v3
  with:
    name: test-snapshots
    path: |
      tests/testthat/_snaps/**/**/*
Step 2: Detect the environment to create variants
We can create a make_variant function to detect the version of the platform, as well as whether we are running on CI. This way, even if we use the same OS on CI and locally, we can still differentiate between snapshots generated on CI and those generated locally.
# tests/testthat/setup.R
is_ci <- function() {
  isTRUE(as.logical(Sys.getenv("CI")))
}

make_variant <- function(platform = shinytest2::platform_variant()) {
  ci <- if (is_ci()) "ci" else NULL
  paste(c(ci, platform), collapse = "-")
}

# In tests:
testthat::expect_snapshot(..., variant = make_variant())
Step 3: Ignore your local snapshots
Don’t check in snapshots generated on your machine; add them to .gitignore instead:
tests/testthat/_snaps/linux-4.4
This way we can still generate snapshots locally to get fast feedback, but we keep only a single source of truth checked into the repository. Since you don’t track changes in local snapshots, you need to regenerate them before you start making changes to see whether they change. It adds some complexity to the process, but it keeps the number of shared snapshots in version control minimal. Alternatively, you can keep local snapshots, but when doing code review, focus only on the ones generated on CI.
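Assuming the linux-4.4 variant directory from above, the ignore entry could look like this:

# .gitignore: keep locally generated snapshot variants out of the repo
tests/testthat/_snaps/linux-4.4/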
Step 4: Automate downloading snapshots from CI
To update snapshots generated on CI in GitHub, we need to:
1. Go to Actions.
2. Find our workflow run.
3. Download the test-snapshots artifact.
4. Unpack it and overwrite the local snapshots.
5. Run testthat::snapshot_review() to review the changes.
6. Commit and push the changes.
This is a lot of steps. We can automate the most laborious ones with the GitHub API.
The .download_ci_snaps function will:
1. Get the list of artifacts in the repository identified by repo and owner, searching workflows generated from the branch we’re currently on.
2. Download the latest artifact with the provided name (in our case it’s “test-snapshots”) from the repository.
3. Unzip it and overwrite the local copy of the snapshots.
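The post’s exact implementation isn’t reproduced in this excerpt; here is a hedged sketch of what such a helper could look like, built on the gh package (the function name matches the post, but the filtering and unzip details are illustrative):

# A sketch, not the post's verbatim code.
.download_ci_snaps <- function(owner, repo, name = "test-snapshots") {
  # 1. List artifacts; the API returns the newest first.
  artifacts <- gh::gh(
    "GET /repos/{owner}/{repo}/actions/artifacts",
    owner = owner, repo = repo
  )$artifacts

  # 2. Keep artifacts with the requested name, produced from this branch.
  branch <- system2("git", c("branch", "--show-current"), stdout = TRUE)
  hits <- Filter(
    function(a) a$name == name &&
      identical(a$workflow_run$head_branch, branch),
    artifacts
  )
  if (length(hits) == 0) stop("No matching artifact found for branch ", branch)

  # 3. Download the newest match and overwrite the local snapshots.
  zip_path <- tempfile(fileext = ".zip")
  gh::gh(
    "GET /repos/{owner}/{repo}/actions/artifacts/{id}/zip",
    owner = owner, repo = repo, id = hits[[1]]$id,
    .destfile = zip_path
  )
  # upload-artifact strips the common path prefix, so unzip back into _snaps.
  utils::unzip(zip_path, exdir = "tests/testthat/_snaps", overwrite = TRUE)
}

After running it, review and commit as usual with testthat::snapshot_review().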
·jakubsob.github.io·
Prompt Storm - A Powerful Easy to use Artificial Intelligence Prompt Engineering Chrome Software Extension for ChatGPT, Google's Gemini, and Anthropic's Claude.
Prompt Storm - A powerful, easy-to-use AI prompt-engineering Chrome extension for ChatGPT, Google's Gemini, and Anthropic's Claude. With just a few clicks you can get the answers you're looking for, create amazing writing, marketing and social media strategies, save time, and boost your productivity.
·promptstorm.app·
HTTP resources and specifications - HTTP | MDN
HTTP was first specified in the early 1990s. Designed with extensibility in mind, it has seen numerous additions over the years; this led to its specification being scattered across numerous documents (amid experimental and abandoned extensions). This page lists relevant resources about HTTP.
·developer.mozilla.org·
The OpenAPI Specification Explained
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
The OpenAPI Specification Explained
The OpenAPI Specification is the ultimate source of knowledge regarding this API description format. However, its length is daunting to newcomers and makes it hard for experienced users to find specific bits of information. This chapter provides a soft landing for readers not yet familiar with OpenAPI and is organized by topic, simplifying browsing. The following pages introduce the syntax and structure of an OpenAPI Description (OAD), its main building blocks and a minimal API description. Afterwards, the different blocks are detailed, starting from the most common and progressing towards advanced ones.
- Structure of an OpenAPI Description: JSON, YAML, openapi and info
- API Endpoints: paths and responses
- Content of Message Bodies: content and schema
- Parameters and Payload of an Operation: parameters and requestBody
- Reusing Descriptions: components and $ref
- Providing Documentation and Examples: description and example/examples
- API Servers: servers
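To make the structure concrete, here is a minimal, hypothetical OpenAPI Description in YAML showing several of the blocks listed above (the API and schema names are illustrative):

openapi: 3.1.0
info:                       # metadata about the API
  title: Example API
  version: 1.0.0
paths:                      # API endpoints
  /users:
    get:
      summary: List users
      responses:            # responses for this operation
        "200":
          description: OK
          content:          # content of the message body
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/User"
components:                 # reusable descriptions
  schemas:
    User:
      type: object
      properties:
        name:
          type: string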
·learn.openapis.org·
Overlays
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Introduction to the OpenAPI Overlay Specification
The Overlay Specification defines a document format for information that transforms an existing OpenAPI description yet remains separate from the OpenAPI description’s source document(s). It defines a mechanism for providing consistent, deterministic updates to a given OpenAPI description, as an aid to automation throughout the API lifecycle. An Overlay can be applied to an OpenAPI description, resulting in an updated OpenAPI description: OpenAPI + Overlays = (better) OpenAPI. One Overlay might be specific to one OpenAPI description, or general enough to be used with multiple OpenAPI descriptions. Equally, one OpenAPI description pipeline might apply different Overlays during the workflow.
Use cases for Overlays
Overlays support a range of scenarios, including:
- Translating documentation into another language
- Providing configuration information for different deployment environments
- Allowing separation of concerns for metadata such as gateway configuration or SLA information
- Supporting a traits-like capability for applying a set of configuration data, such as multiple parameters or multiple headers, for targeted objects
- Providing default responses or parameters where they were not explicitly provided
- Applying configuration data globally or based on filter conditions
Resources for working with Overlays
The GitHub repository for Overlays is the main hub of activity on the Overlays project. Check the issues and pull requests for what is currently in progress, and the discussions for details of future ideas and our regular meetings. The project maintains a list of tools for working with Overlays.
·learn.openapis.org·
Example: add params selectively
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Example: Add multiple parameters to selected operations
One of the most requested features for OpenAPI is the ability to group parameters and easily apply all of them together, to either some or all operations in an OpenAPI description. Especially for common parameters that always come as a set (pagination or filter parameters are a great example), it can be more maintainable to use them as a “trait” and apply the set as part of the API lifecycle, rather than trying to maintain a source of truth with a lot of repetition. This approach leads to good API governance, since if the collection of fields changes, the update is consistently applied through automation. In the following example, any operation with the extension x-supports-filters set to true will have two inline parameters added to its parameter collection, and an x-filters-added tag for decoration/debugging.
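The excerpt doesn’t include the example’s source, so the following is a hedged reconstruction of what such an Overlay could look like; the parameter names are illustrative, and the exact JSONPath filter syntax may vary between Overlay tools:

overlay: 1.0.0
info:
  title: Add filter parameters to operations that opt in
  version: 1.0.0
actions:
  # Select operation objects whose x-supports-filters extension is true.
  - target: $.paths.*[?(@['x-supports-filters'] == true)]
    update:
      parameters:
        - name: filter      # illustrative query parameter
          in: query
          schema:
            type: string
        - name: page        # illustrative query parameter
          in: query
          schema:
            type: integer
      tags:
        - x-filters-added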
·learn.openapis.org·
Example: tag DELETE operations
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Example: add a license
Every API needs a license so people know they can use it, but what if your OpenAPI descriptions don’t have a license? This example shows an Overlay that adds a license to an OpenAPI description. Here’s the Overlay file, with just one action to add or change the info.license fields:

overlay: 1.0.0
info:
  title: Add MIT license
  version: 1.0.0
actions:
  - target: '$.info'
    update:
      license:
        name: MIT
        url: https://opensource.org/licenses/MIT

You can use this Overlay with different OpenAPI files to make the same change to a batch of files.
Example: tag DELETE operations
To add the same tag to all operations in an OpenAPI description that use DELETE methods, use an Overlay like the example below. This example adds an x-restricted tag to all delete operations:

overlay: 1.0.0
info:
  title: Tag delete operations as restricted
  version: 1.0.0
actions:
  - target: $.paths.*.delete
    update:
      tags:
        - x-restricted

This overlay adds x-restricted to the tags array for each delete operation. If the tags array doesn’t exist, it’ll be created; if it does, the new tag is added to the existing array. You can use an approach like this to make other changes to all matching operations.
·learn.openapis.org·
Remove References
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Resolve References
A reference is said to be resolved within a tool if:
- Its target has been identified
- Any modifications to the target required by the OAS (e.g. because of fields adjacent to "$ref") have been performed on a copy of the target
- The resulting target value has been associated with the reference source in some way that the tool can easily use when needed

A reference is said to be removed if it has been replaced by its (possibly modified) target. Reference resolution usually preserves the referencing information, such as the URI used to identify the target, while reference removal usually discards it. In many cases this is not significant, except that not knowing how the parsed OAD relates to the references in your JSON or YAML document may make debugging more difficult. While plain JSON documents form a tree structure, an OpenAPI Description with resolved references is not necessarily a tree, but a graph. Tools that resolve references in-memory and work with the graph structure can process all OADs. Tools that rely on reference removal, either as part of the tool or by a separate pre-processing tool, can only support OADs that form trees.
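To make the tree-versus-graph point concrete, consider this hypothetical self-referential schema: resolving the $ref below yields a cycle, so no amount of inlining (removal) can turn it into a finite tree:

components:
  schemas:
    Person:
      type: object
      properties:
        name:
          type: string
        friends:
          type: array
          items:
            # This reference points back to the enclosing schema, so the
            # resolved structure is a graph with a cycle, not a tree.
            $ref: '#/components/schemas/Person'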
·learn.openapis.org·
References Overview
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
What are references?
A reference is a keyword and value that identifies a reference target with a URI. In some cases, this URI can be treated as a URL and de-referenced directly. In other cases, as we will see in the (forthcoming) guide to resolving references, it is helpful to separate the target’s identity from its location. External references are how multiple documents are linked into a single OpenAPI Description (OAD). This means that referencing impacts how other linkages work, such as those that use Components Object names, or values such as operationId in the Path Item Object. These other linkages can only work if the document (or, with many tools, the specific JSON object) containing the name or other identifier has been referenced.
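For instance, a minimal, hypothetical external reference; the "$ref" value is a URI identifying the target, here a schema kept in a separate document:

paths:
  /users/{id}:
    get:
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                # External reference: pulls a schema from another document,
                # linking the two files into a single OAD.
                $ref: './schemas.yaml#/components/schemas/User'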
A taxonomy of references
References exist in several variations in the OpenAPI Specification (OAS) versions 3.0 and 3.1, as shown in the following table. Note that an adjacent keyword is a keyword in the same JSON Object (whether it is written in JSON or YAML) as the reference keyword.
·learn.openapis.org·
Using References
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Using References in OpenAPI Descriptions
OpenAPI Referencing is a powerful tool. It allows managing document size and complexity, and allows re-using shared components and avoiding copy-paste or change-management errors. However, the history of referencing and the "$ref" keyword is complex, leading to slightly different behavior depending on the version of the OpenAPI Specification (OAS) that you are using, and on where in your OpenAPI Description (OAD) the reference occurs. There are also other "$ref"-like keywords ("operationRef", "mapping") and behaviors (referencing by component name or operation ID) that are related but somewhat different. Referencing can also be challenging to use due to incomplete and inconsistent support across different tools, some of which require references to be pre-processed before they can read the OAD. The resources in this section explain how to use referencing, and what to look for when assessing the referencing support in your OpenAPI tools.
The Future of References
There are plans to reduce the complexity around references in future OpenAPI Specifications. The Moonwalk project is considering a different approach that imports complete documents, associates them with namespaces, and only supports referencing by component name (not "$ref"). A small example can be seen in the Moonwalk deployments proposal, and there are discussions around an initial draft proposal for imports and a few ideas on how to manage interactions with JSON Schema referencing. The proposed Workflows Specification is already using a "sourceDescription" field that is not unlike the Moonwalk proposal.
·learn.openapis.org·