02-AREAS

2291 bookmarks
Highlight AI | Master your world
Get instant answers about anything you've seen, heard or said. Download free: highlightai.com
·highlightai.com·
Ref.
Documentation for your agent.
·ref.tools·
Agentic Engineer - Build LIVING software
Build LIVING software. Your guide to mastering prompts, AI coding, AI agents, and agentic workflows.
·agenticengineer.com·
shinymgr: A Framework for Building, Managing, and Stitching Shiny Modules into Reproducible Workflows
The R package shinymgr provides a unifying framework that allows Shiny developers to create, manage, and deploy a master Shiny application comprised of one or more "apps", where an "app" is a tab-based workflow that guides end-users through a step-by-step analysis. Each tab in a given "app" consists of one or more Shiny modules. The shinymgr app builder allows developers to "stitch" Shiny modules together so that outputs from one module serve as inputs to the next, creating an analysis pipeline that is easy to implement and maintain. Apps developed using shinymgr can be incorporated into R packages or deployed on a server, where they are accessible to end-users. Users of shinymgr apps can save analyses as an RDS file that fully reproduces the analytic steps and can be ingested into an RMarkdown or Quarto report for rapid reporting. In short, developers use the shinymgr framework to write Shiny modules and seamlessly combine them into Shiny apps, and end-users of these apps can execute reproducible analyses that can be incorporated into reports for rapid dissemination. A comprehensive overview of the package is provided by 12 learnr tutorials.
·journal.r-project.org·
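For flavor, here is a minimal pair of plain Shiny modules wired so that one module's reactive output feeds the next — the pattern shinymgr's app builder automates. This uses only base shiny, not shinymgr's own builder API, and all names are illustrative.

```r
library(shiny)

# Module 1: returns a reactive, which the caller hands to module 2.
filterUI <- function(id) {
  ns <- NS(id)
  numericInput(ns("cutoff"), "Keep rows with mpg above:", value = 20)
}
filterServer <- function(id, data) {
  moduleServer(id, function(input, output, session) {
    reactive(data[data$mpg > input$cutoff, ])  # returned to the caller
  })
}

# Module 2: consumes the reactive produced by module 1.
summaryUI <- function(id) {
  ns <- NS(id)
  verbatimTextOutput(ns("stats"))
}
summaryServer <- function(id, filtered) {
  moduleServer(id, function(input, output, session) {
    output$stats <- renderPrint(summary(filtered()))
  })
}

ui <- fluidPage(filterUI("step1"), summaryUI("step2"))
server <- function(input, output, session) {
  filtered <- filterServer("step1", mtcars)  # module 1 output...
  summaryServer("step2", filtered)           # ...becomes module 2 input
}
shinyApp(ui, server)
```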
Futureverse
A Unifying Parallelization Framework in R for Everyone
·futureverse.org·
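A minimal sketch of the future package's core abstraction, the `%<-%` future-assignment operator, assuming a local multisession backend:

```r
library(future)

# Evaluate both expressions concurrently on background R sessions;
# plan() picks the backend without changing the code below.
plan(multisession)

a %<-% { Sys.sleep(1); 1 + 1 }
b %<-% { Sys.sleep(1); 2 + 2 }

a + b  # blocks only until both futures resolve; returns 6
```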
Introduction
Build production-ready Copilots and Agents effortlessly.
·docs.copilotkit.ai·
How to make data pipelines idempotent
Unable to find practical examples of idempotent data pipelines? Then this post is for you. It walks through a technique you can use to make your data pipelines professional and data reprocessing a breeze.
·startdataengineering.com·
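The core technique is delete-then-write over a run-scoped partition (the post's examples use Spark/SQL); here is a toy R sketch of the same idea against a local directory, with the paths and `run_date` purely illustrative:

```r
run_date  <- as.Date("2024-01-15")
partition <- file.path("warehouse", "events", paste0("dt=", run_date))

load_partition <- function(df, partition) {
  # Reruns for the same run_date replace the partition wholesale,
  # so executing the pipeline twice yields the same final state.
  unlink(partition, recursive = TRUE)
  dir.create(partition, recursive = TRUE)
  write.csv(df, file.path(partition, "part-0.csv"), row.names = FALSE)
}

events <- data.frame(id = 1:3, value = c(10, 20, 30))
load_partition(events, partition)
load_partition(events, partition)  # idempotent: same result on rerun
```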
Shell and A.I - Steven Bucher - PSConfEU 2024
In this extensive lecture, I, Steven Bucher, a product manager on the PowerShell team, discuss the integration of AI into the shell environment. Over the pas...
·youtu.be·
autodb: Automatic Database Normalisation for Data Frames
Automatic normalisation of a data frame to third normal form, with the intention of easing the process of data cleaning. (Usage to design your actual database for you is not advised.) Originally inspired by the 'AutoNormalize' library for 'Python' by 'Alteryx' (https://github.com/alteryx/autonormalize), with various changes and improvements. Automatic discovery of functional or approximate dependencies, normalisation based on those, and plotting of the resulting "database" via 'Graphviz', with options to exclude some attributes at discovery time, or remove discovered dependencies at normalisation time.
·cran.r-project.org·
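Based on the package description, usage presumably looks something like the sketch below; the `autodb()` one-call entry point and the printing behaviour are assumptions from the docs, not verified here.

```r
library(autodb)

# Toy table with a transitive dependency: tutor is determined by
# course, so 3NF decomposition should split it into its own relation.
enrolments <- data.frame(
  student = c("ann", "ann", "bob"),
  course  = c("math", "art", "math"),
  tutor   = c("kim", "lee", "kim")
)

db <- autodb(enrolments)  # assumed discover-dependencies-and-normalise call
db                        # print the resulting relation schemas
```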
Data Pipeline Design Patterns - #1. Data flow patterns
Data pipelines built (and added on to) without a solid foundation suffer from poor efficiency, slow development speed, long triage times for production issues, and poor testability. What if your data pipelines were elegant and enabled you to deliver features quickly? An easy-to-maintain and extendable data pipeline significantly increases developer morale, stakeholder trust, and the business bottom line! Using the correct design pattern increases feature delivery speed and developer value (allowing devs to do more in less time), decreases toil during pipeline failures, and builds trust with stakeholders. This post goes over the most commonly used data flow design patterns: what they do, when to use them, and, more importantly, when not to use them. By the end of the post, you will have an overview of the typical data flow patterns and be able to choose the right one for your use case.
·startdataengineering.com·
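As a taste of one widely used data flow pattern — staged ("multi-hop") transformation, where every hop persists an inspectable layer — a self-contained R sketch; the bronze/silver/gold layer names follow a common convention and are not necessarily the post's own taxonomy:

```r
# bronze: data as ingested, warts and all
raw <- data.frame(customer = c("a", "a", "b"), amount = c(100, NA, 50))

# silver: validated/cleaned rows only
cleaned <- raw[!is.na(raw$amount) & raw$amount > 0, ]

# gold: business-level aggregate served to consumers
by_customer <- aggregate(amount ~ customer, data = cleaned, FUN = sum)
by_customer  # customer "a" -> 100, "b" -> 50
```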
Advanced Tidyverse
Use piped workflows for efficient data cleaning and visualization.
·sesync-ci.github.io·
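A small example of the piped style the lesson covers, using dplyr (the dataset and thresholds are arbitrary):

```r
library(dplyr)

# Clean, derive, and summarise in one pipe, no intermediate objects.
mtcars %>%
  filter(cyl %in% c(4, 6)) %>%            # keep 4- and 6-cylinder cars
  mutate(hp_per_ton = hp / (wt / 2)) %>%  # wt is in 1000 lb units
  group_by(cyl) %>%
  summarise(mean_mpg = mean(mpg),
            mean_hp_per_ton = mean(hp_per_ton)) %>%
  arrange(desc(mean_mpg))
```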
Summarizing and Querying Data from Excel Spreadsheets Using eparse and a Large Language Model
Editor's Note: This post was written by Chris Pappalardo, a Senior Director at Alvarez & Marsal, a leading global professional services firm. The standard processes for building with LLMs work well for documents that contain mostly text, but do not work as well for documents that contain tabular data (like spreadsheets). We wrote about our latest thinking on Q&A over CSVs on the blog a couple of weeks ago, and we loved reading Chris's exploration of working with CSVs and LangChain using agents and chains.
·blog.langchain.dev·