Shiny Baseball

Effective State Management in Shiny Modules: A React-Inspired Approach
Learn how to manage state in Shiny modules using a React-inspired approach with event handlers for better control and flexibility.
Shiny Packaging Custom JS
Shiny is a package that makes it easy to create interactive web apps using R and Python.
Shiny Custom Input Bindings
Shiny is a package that makes it easy to create interactive web apps using R and Python.
Shiny Sending Messages
Shiny is a package that makes it easy to create interactive web apps using R and Python.
Shiny Selectize Input
Shiny is a package that makes it easy to create interactive web apps using R and Python.
Introducing fodr: a package for French open data in R
Nowadays, more and more government organisations subscribe to the open data movement and some have done so in France, in the hopes that new services or insights would come from the analysis of this data.
Testing Legacy Shiny Apps: Start with Behavior, Not Code
Adding acceptance tests first makes refactoring safer.
Enterprise UI Design: Professional Bootstrap 5 for Shiny Apps
Master enterprise-grade UI/UX design for Shiny applications using Bootstrap 5, bslib theming, and professional design systems. Learn to create accessible, responsive interfaces that meet corporate standards for biostatistics and clinical research applications.
Teaching chat apps about R packages - Posit
Simon Couch demonstrates how the btw package provides context to LLMs through system prompts and tool calls.
shinymgr: A Framework for Building, Managing, and Stitching Shiny Modules into Reproducible Workflows
The R package shinymgr provides a unifying framework that allows Shiny developers to create, manage, and deploy a master Shiny application comprised of one or more "apps", where an "app" is a tab-based workflow that guides end-users through a step-by-step analysis. Each tab in a given "app" consists of one or more Shiny modules. The shinymgr app builder allows developers to "stitch" Shiny modules together so that outputs from one module serve as inputs to the next, creating an analysis pipeline that is easy to implement and maintain. Apps developed using shinymgr can be incorporated into R packages or deployed on a server, where they are accessible to end-users. Users of shinymgr apps can save analyses as an RDS file that fully reproduces the analytic steps and can be ingested into an RMarkdown or Quarto report for rapid reporting. In short, developers use the shinymgr framework to write Shiny modules and seamlessly combine them into Shiny apps, and end-users of these apps can execute reproducible analyses that can be incorporated into reports for rapid dissemination. A comprehensive overview of the package is provided by 12 learnr tutorials.
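The stitching idea at the core of shinymgr can be illustrated with plain {shiny}: the reactive returned by one module is passed as an argument to the next. The sketch below is a generic, hypothetical two-step example and does not use shinymgr's own builder functions.

library(shiny)

# Module 1: let the user pick a dataset and return it as a reactive.
pickDataUI <- function(id) selectInput(NS(id, "set"), "Dataset", c("mtcars", "iris"))
pickDataServer <- function(id) {
  moduleServer(id, function(input, output, session) {
    reactive(switch(input$set, mtcars = mtcars, iris = iris))
  })
}

# Module 2: summarise whatever data an upstream module produced.
summariseUI <- function(id) verbatimTextOutput(NS(id, "summary"))
summariseServer <- function(id, data) {
  moduleServer(id, function(input, output, session) {
    output$summary <- renderPrint(summary(data()))
  })
}

ui <- fluidPage(pickDataUI("step1"), summariseUI("step2"))
server <- function(input, output, session) {
  picked <- pickDataServer("step1")        # output of step 1 ...
  summariseServer("step2", data = picked)  # ... becomes input to step 2
}
shinyApp(ui, server)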
Futureverse
A Unifying Parallelization Framework in R for Everyone
Get started - xlcharts
You Don’t Need Airflow: Orchestrate Many Data Flows in R with Maestro – data-in-flight
Shiny App Workflows
This is a book that covers the standard Shiny app workflow.
autodb: Automatic Database Normalisation for Data Frames
Automatic normalisation of a data frame to third normal form, with the intention of easing the process of data cleaning. (Usage to design your actual database for you is not advised.) Originally inspired by the 'AutoNormalize' library for 'Python' by 'Alteryx' (<https://github.com/alteryx/autonormalize>), with various changes and improvements. Automatic discovery of functional or approximate dependencies, normalisation based on those, and plotting of the resulting "database" via 'Graphviz', with options to exclude some attributes at discovery time, or remove discovered dependencies at normalisation time.
Advanced Tidyverse
Use piped workflows for efficient data cleaning and visualization.
Technical Guidelines for R
Best practices with R on selected topics.
BillPetti/baseballr: A package written for R focused on baseball analysis. Currently in development.
Add Authentication and SSO to Your Shiny App
Learn how to implement strong authentication and SSO in Shiny apps with Descope. This guide integrates both OIDC and SAML with Posit Connect for seamless login.
Powerful Classes for HTTP Requests and Responses
To facilitate parsing of HTTP requests and creating appropriate responses, this package provides two classes that handle much of the housekeeping involved in working with HTTP exchanges. The infrastructure builds upon the Rook specification and is thus well suited to be combined with httpuv-based web servers.
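As a point of reference, the raw Rook-style exchange that such request/response classes tidy up looks roughly like the httpuv sketch below; it uses httpuv directly rather than the package's own classes, and the route is a made-up example.

app <- list(
  call = function(req) {
    # `req` is a Rook environment: request method, path, headers, body, etc.
    path <- req$PATH_INFO
    list(
      status  = if (identical(path, "/hello")) 200L else 404L,
      headers = list("Content-Type" = "text/plain"),
      body    = if (identical(path, "/hello")) "Hello, world" else "Not found"
    )
  }
)
# httpuv::runServer("127.0.0.1", 8080, app)  # uncomment to serve locally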
Send Error Reports to the Google Error Reporting Service API
Send error reports to the Google Error Reporting service and view errors and assign error status in the Google Error Reporting user interface.
tapLock/R/google.R at main · ixpantia/tapLock
Seamless SSO for R applications
rcrd (record) S3 class — new_rcrd
The rcrd class extends vctr. A rcrd is composed of one or more fields, which must be vectors of the same length. It is designed specifically for classes that can naturally be decomposed into multiple vectors of the same length, like POSIXlt, but where the organisation should be considered an implementation detail invisible to the user (unlike a data.frame).
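A minimal sketch of the pattern, using a hypothetical "latlon" class in the spirit of the vctrs documentation: two parallel fields stored internally, presented to the user as one vector.

library(vctrs)

# Construct a record from two equal-length fields.
latlon <- function(lat, lon) {
  new_rcrd(list(lat = lat, lon = lon), class = "latlon")
}

# A format() method controls how each element displays as a single value.
format.latlon <- function(x, ...) {
  paste0("(", field(x, "lat"), ", ", field(x, "lon"), ")")
}

x <- latlon(c(32.71, 41.89), c(-117.16, 12.49))
x
#> <latlon[2]>
#> [1] (32.71, -117.16) (41.89, 12.49)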
Dates and Times in R
r-lib/producethis: What the Package Does (One Line, Title Case)
Note the use of the /exec folder for different deployable workflows
Guest Blog: Reproducible Data Pipelines In R With {targets} - ESIP
Reproducibility is a huge challenge in science, especially as datasets grow larger and workflows become more complex. Enter targets, an R package that helps you build reproducible, automated data pipelines.
A data workflow is the series of steps that turn raw data into something meaningful — think downloading, cleaning, analyzing and visualizing. You might already do this in R with a mix of scripts and notebooks. Some steps in your data workflow may also be manual and require no coding, such as data processing in Excel or uploading model output data to OneDrive.
A data pipeline, on the other hand, is an automated version of that workflow. It ensures that every step happens in order, that only the necessary steps are rerun when data changes, and that the results are reproducible every time. A well-structured pipeline ensures that anyone revisiting the analysis — including your future self — can rerun, verify and build on the work without extra effort or missing pieces.
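In {targets}, such a pipeline is declared in a _targets.R file as a list of targets; the file name and column names below are hypothetical placeholders.

# _targets.R -- a minimal {targets} pipeline sketch
library(targets)
tar_option_set(packages = c("readr", "dplyr", "ggplot2"))

list(
  tar_target(raw_file, "data/raw.csv", format = "file"),   # tracked input file
  tar_target(raw,      readr::read_csv(raw_file)),
  tar_target(clean,    dplyr::filter(raw, !is.na(value))),
  tar_target(fig,      ggplot2::ggplot(clean, ggplot2::aes(date, value)) +
                         ggplot2::geom_line())
)

Running targets::tar_make() builds the pipeline and skips any target whose code and upstream inputs have not changed.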
Retrieval-Augmented Generation (RAG) Workflows
Provides tools for implementing Retrieval-Augmented Generation (RAG) workflows with Large Language Models (LLMs). Includes functions for document processing, text chunking, embedding generation, storage management, and content retrieval. Supports various document types and embedding providers (Ollama, OpenAI), with DuckDB as the default storage backend. Integrates with the ellmer package to equip chat objects with retrieval capabilities. Designed to offer both sensible defaults and customization options with transparent access to intermediate outputs.
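The retrieval step at the heart of such a workflow can be illustrated without any embedding provider at all; the toy helpers below (crude term-count "embeddings" plus cosine similarity) are ad hoc for illustration and are not this package's API.

# Toy RAG retrieval: chunk a document, "embed" chunks as term counts,
# and rank chunks by cosine similarity to a query.

chunk_text <- function(x, size = 8) {
  words  <- strsplit(x, "\\s+")[[1]]
  groups <- split(words, ceiling(seq_along(words) / size))
  vapply(groups, paste, character(1), collapse = " ")
}

embed_tf <- function(texts, vocab) {
  # One row per text, one count per vocabulary word.
  t(vapply(texts, function(txt) {
    tokens <- tolower(strsplit(txt, "\\W+")[[1]])
    vapply(vocab, function(w) sum(tokens == w), numeric(1))
  }, numeric(length(vocab))))
}

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)) + 1e-12)

doc <- paste(
  "Shiny modules keep UI and server logic together.",
  "The targets package builds reproducible pipelines.",
  "DuckDB stores embeddings on disk for later retrieval."
)
chunks <- chunk_text(doc)
vocab  <- unique(tolower(unlist(strsplit(doc, "\\W+"))))

query  <- "where are the embeddings stored?"
scores <- apply(embed_tf(chunks, vocab), 1, cosine, b = embed_tf(query, vocab)[1, ])
chunks[which.max(scores)]   # best-matching chunk to hand back to the LLM prompt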
GMH DataHub