Found 29 bookmarks
Three experiments in LLM code assist with RStudio and Positron - Tidyverse
We've been experimenting with LLM-powered tools to streamline R data science and package development.
Twice a year, the tidyverse team sets a week aside for “spring cleaning,” bringing all of our R packages up to snuff with the most current tooling and standardizing various bits of our development process. Some of these updates can happen by calling a single function, while others are much more involved. One of the more involved updates is updating erroring code: transitioning to cli from base R (e.g. stop()), rlang (e.g. rlang::abort()), glue, and homegrown combinations of them. cli’s new syntax is easier to work with as a developer and more visually pleasing as a user.
In some cases, transitioning is almost as simple as finding and replacing rlang::abort() with cli::cli_abort():
# before:
rlang::abort("`save_pred` can only be used if the initial results saved predictions.")

# after:
cli::cli_abort("{.arg save_pred} can only be used if the initial results saved predictions.")
In others, there’s a mess of ad-hoc pluralization, paste0()s, glue interpolations, and other assorted nonsense to sort through:
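As a hypothetical stand-in for that messier case (the function, message, and column names here are invented for illustration), the shape of the change looks like this:

# before: ad-hoc pluralization, paste0(), and glue stitched together
n_bad <- length(bad_cols)
stop(paste0(
  "Found ", n_bad, " problematic column",
  if (n_bad == 1) "" else "s", ": ",
  glue::glue_collapse(bad_cols, sep = ", ", last = " and ")
))

# after: cli pluralizes with {?s} and collapses vectors automatically
cli::cli_abort("Found {n_bad} problematic column{?s}: {.field {bad_cols}}.")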
Thus was born clipal, a (now-superseded) R package that allows users to select erroring code, press a keyboard shortcut, wait a moment, and watch the updated code be inlined into the selection.
clipal was a huge boost for us in the most recent spring cleaning. Depending on the code being updated, these erroring calls used to take 30 seconds to a few minutes each. With clipal, though, the model could usually get the updated code 80% or 90% of the way there in a couple of seconds. Up to this point, irritated by autocomplete and frustrated by the friction of copying and pasting code and typing out the same bits of context into chats again and again, I had been relatively skeptical that LLMs could make me more productive. After using clipal for a week, though, I began to understand how seamlessly LLMs could automate the cumbersome and uninteresting parts of my work.
clipal itself is now superseded by pal, a more general solution to the problem clipal solved. I’ve also written two additional packages, ensure and gander, that solve two other classes of pal-like problems using similar tools. In this post, I’ll write a bit about how I’ve used a pair of tools in three experiments that have made me much more productive as an R developer.
After using clipal during our spring cleaning, I approached another spring cleaning task for the week: updating testing code. testthat 3.0.0 was released in 2020, bringing with it numerous changes that were both huge quality-of-life improvements for package developers and also highly breaking changes. While some of the work of converting legacy unit testing code to testthat 3e is relatively straightforward, other parts can be quite tedious. Could I do the same thing for updating to testthat 3e that I did for transitioning to cli? I sloppily threw together a sister package to clipal that would convert tests for errors to snapshot tests, disentangle nested expectations, and transition away from deprecated functions like expect_known_*(). (If you’re interested, the current prompt for that functionality is here.) That sister package was also a huge boost for me, but it reused as-is almost every piece of code from clipal other than the prompt. Thus, I realized that the proper solution would provide all of this scaffolding to attach a prompt to a keyboard shortcut, but allow for an arbitrary set of prompts to help automate these wonky, cumbersome tasks.
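The flavor of that error-test conversion, on an invented example (check_outcome() is hypothetical):

# before: testthat 2e, matching a fragile substring of the error message
expect_error(
  check_outcome(outcome = NULL),
  "`outcome` must be supplied"
)

# after: testthat 3e, recording the full error as a snapshot
expect_snapshot(
  error = TRUE,
  check_outcome(outcome = NULL)
)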
The next week, pal was born. The pal package ships with three prompts centered on package development: the cli pal and testthat pal mentioned previously, as well as the roxygen pal, which drafts minimal roxygen documentation based on a function definition. Here’s what pal’s interface looks like now:
ensure
While deciding on the initial set of prompts that pal would include, I really wanted some sort of “write unit tests for this function” pal. Really addressing that problem, though, requires violating two of pal’s core assumptions:
All of the context that you need is in the selection and the prompt. In the case of writing unit tests, it’s actually pretty important to have other pieces of context. If a package provides some object type potato, in order to write tests for some function that takes potato as input, it’s likely very important to know how potatoes are created and the kinds of properties they have. pal’s sister package for writing unit tests, ensure, can thus “see” the rest of the file that you’re working on, as well as context from neighboring files like other .R source files, the corresponding test file, and package vignettes, to learn about how to interface with the function arguments being tested.
The LLM’s response can prefix, replace, or suffix the active selection in the same file. In the case of writing unit tests for R, the place tests actually ought to go is a corresponding test file in tests/testthat/. Via the RStudio API, ensure can open up that test file and write to it rather than to the source file it was triggered from.
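A minimal sketch of that mechanism via the rstudioapi package (the path arithmetic and the drafted test are invented stand-ins; ensure’s real implementation is more careful):

# from a source file like R/potato.R, derive tests/testthat/test-potato.R
ctx <- rstudioapi::getSourceEditorContext()
test_path <- file.path("tests", "testthat", paste0("test-", basename(ctx$path)))

# open the corresponding test file and append a drafted test at the end
rstudioapi::navigateToFile(test_path)
rstudioapi::insertText(
  location = rstudioapi::document_position(Inf, 1),
  text = 'test_that("potato() returns a potato", {\n  expect_s3_class(potato(), "potato")\n})\n'
)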
gander
·tidyverse.org·
AI-Powered Development: A Practical Guide for Software Engineers
Artificial Intelligence (AI) is no longer a distant future technology; it’s here and it’s reshaping software engineering. Tools like GitHub Copilot and ChatGPT are accelerating the development…
Impact of AI on Developer Productivity:
- Faster development cycles: code suggestions and automation reduce time spent on repetitive tasks.
- Improved code quality: AI tools identify bugs or security risks that may go unnoticed by manual reviews.
- Enhanced learning: engineers can receive real-time feedback or even ask AI for code explanations to learn new patterns or frameworks.
GitHub Copilot is a game-changer for writing code. Powered by OpenAI’s Codex model, Copilot suggests lines of code based on the context of what you’re writing. It’s especially useful when you’re working with repetitive tasks or writing boilerplate code.
ChatGPT, an AI chatbot developed by OpenAI, is not just a tool for casual conversations. It can be used to ask technical questions, explain difficult code, or even generate ideas for solving specific coding problems. Developers often use it for quick consultations — whether it’s about debugging or understanding the intricacies of a particular algorithm.
AI-Assisted System Architecture and Design
As AI becomes more sophisticated, it may start to play a role in designing system architectures. Currently, system design is one of the more complex tasks that engineers handle, requiring a deep understanding of the trade-offs between different architectural patterns (monolithic vs. microservices, synchronous vs. asynchronous communication, etc.).
Future AI tools could help design optimal architectures by analyzing the specific needs of a project, performance goals, and scalability requirements. AI could suggest which patterns, frameworks, or technologies are best suited for a given application. It could even generate architecture diagrams, API designs, or database schemas based on historical data from similar projects.
This would revolutionize system design, making it faster and more accessible to engineers of all levels. While experienced architects would still be needed to make judgment calls, AI could drastically reduce the time spent on initial design phases, especially in large and complex systems.
·medium.com·
Prompt Storm - A Powerful Easy to use Artificial Intelligence Prompt Engineering Chrome Software Extension for ChatGPT, Google's Gemini, and Anthropic's Claude.
Prompt Storm - A powerful, easy-to-use AI prompt-engineering Chrome extension for ChatGPT, Google's Gemini, and Anthropic's Claude. With just a few clicks you can get the answers you're looking for, create amazing writing, marketing, and social media strategies, save time, and boost your productivity.
·promptstorm.app·
SpeCrawler: Generating OpenAPI Specifications from API Documentation Using Large Language Models
In the digital era, the widespread use of APIs is evident. However, scalable utilization of APIs poses a challenge due to structure divergence observed in online API documentation. This underscores the need for automat…
·ar5iv.labs.arxiv.org·
Crafting Intelligent User Experiences: A Deep Dive into OpenAI Assistants API
Elevate, Enhance, and Empower your apps with Assistants APIs and Tools
What’s an OpenAI Assistant? Think of it as software glue that lets you build agent-like capabilities into your applications, conducting tasks expressed as natural-language instructions to an Assistant. Able to understand instructions, it can leverage OpenAI’s SOTA models and tools to carry out tasks. With the stateful Assistants API, you can create Assistants within your application, giving you access to three types of supported tools: Code Interpreter, Retrieval, and Function calling [5]. At its core are a few concepts and components that interact cogently to enable agent-like capabilities.
Assistants API: concepts, components, and tools. Unfortunately, the OpenAI documentation falls short of explaining these components in finer detail and showing how they work together. Randy Michak of Empowerment AI does a fine job of dissecting these core components and illustrating their flow and data interactions [7]. Inspired by Michak, I mildly modified Figure 4, showing the dynamic interaction and data flow among Assistants API components.
To get started with Assistants, the OpenAI guide stipulates four simple steps to glue these core components together for coordination [8]:
Step 1: Create an Assistant: declare a custom model and provide instructions for the Assistant. This helps the Assistant select the appropriate supported tool to employ.
Step 2: Create a Thread: a stateful session for the Assistant to retrieve messages from and add Assistant messages to.
Step 3: Use the Thread as a conversational session to add messages for the Assistant to consume.
Step 4: Run the Assistant on a newly added Thread message to trigger responses. The Run is the Assistant’s asynchronous runtime environment.
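A rough sketch of those four steps from R, talking to the beta REST endpoints with httr2 (endpoint paths and fields follow the v1 beta docs; the model name and message text are illustrative, and this is not production code):

library(httr2)

openai <- function(...) {
  request("https://api.openai.com/v1") |>
    req_url_path_append(...) |>
    req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
    req_headers(`OpenAI-Beta` = "assistants=v1")
}

# Step 1: create an Assistant with a model and instructions
assistant <- openai("assistants") |>
  req_body_json(list(
    model = "gpt-4-turbo-preview",
    instructions = "Answer questions about the attached documents."
  )) |>
  req_perform() |>
  resp_body_json()

# Step 2: create a Thread, the stateful session
thread <- openai("threads") |>
  req_method("POST") |>
  req_perform() |>
  resp_body_json()

# Step 3: add a user Message to the Thread
openai("threads", thread$id, "messages") |>
  req_body_json(list(role = "user", content = "Summarize the key points.")) |>
  req_perform()

# Step 4: Run the Assistant on the Thread; Runs are asynchronous, so poll until done
run <- openai("threads", thread$id, "runs") |>
  req_body_json(list(assistant_id = assistant$id)) |>
  req_perform() |>
  resp_body_json()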
How does it all work together?
Let’s methodically walk through a simple example that accomplishes the following:
- Integrate the Assistants API, using the Retrieval tool, to a) upload a couple of PDF documents and b) use an Assistant to query their contents. Consider this a mini Retrieval-Augmented Generation (RAG) application.
- Use File objects to upload the PDFs so that the Assistant can access them.
- Create and employ Assistant, Thread, Message, and Run objects to query the uploaded PDF documents.
- Coordinate all these concrete objects to interact and interplay together as part of the application.
Step 1: Create File objects as our knowledge base. Upload your PDFs into the retriever’s database using File objects. The Assistants API breaks them into chunks and saves them as indexes and vector embeddings. When you ask a question, the Retrieval tool finds the best matches and helps the Assistant give you a detailed answer, just like a big RAG retriever.
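In httr2 terms, the upload is a multipart POST (the filename is invented; purpose = "assistants" is what makes the file available to the Assistants tooling):

# upload a PDF for the Retrieval tool to chunk, index, and embed
file <- request("https://api.openai.com/v1/files") |>
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  req_body_multipart(
    purpose = "assistants",
    file = curl::form_file("handbook.pdf")
  ) |>
  req_perform() |>
  resp_body_json()

file$id  # pass this ID to the Assistant's file_ids in the next step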
Step 2: Create an Assistant object. To use an Assistant and conduct tasks, first create an AI Assistant object, supplying as parameters a model, instructional behavior, tools to use, and file IDs to draw on for its knowledge base.
·ai.gopubby.com·
LLM Beyond its Core Capabilities as AI Assistants or Agents
Transform your LLM into a helpful assistant with function calling
Both the OpenAI programming guide and the Anyscale Endpoints blog [7] distill the flow down to simple steps:
1. Call the model with the user query and a list of functions defined as tools in the Chat Completions API parameter.
2. The model can choose to call one or more functions; if so, the content will be a stringified JSON object adhering to your custom schema.
3. Parse the string into JSON in your code, and call your function with the provided arguments if they exist.
4. Call the model again, appending the function response as a new message, and let the model summarize the results back to the user.
Following these steps, our user_content to the LLM generates three required parameters (location, latitude, longitude) as a JSON object in its response.
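From R, the first call of that loop might look as follows (a sketch; the get_current_weather schema is invented to match the example’s three parameters):

library(httr2)

weather_tool <- list(
  type = "function",
  `function` = list(
    name = "get_current_weather",
    description = "Get the current weather for a location",
    parameters = list(
      type = "object",
      properties = list(
        location = list(type = "string"),
        latitude = list(type = "number"),
        longitude = list(type = "number")
      ),
      required = list("location", "latitude", "longitude")
    )
  )
)

resp <- request("https://api.openai.com/v1/chat/completions") |>
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  req_body_json(list(
    model = "gpt-4",
    messages = list(list(role = "user", content = "What's the weather in Oslo?")),
    tools = list(weather_tool)
  )) |>
  req_perform() |>
  resp_body_json()

# if the model elected to call the function, its arguments arrive as stringified JSON
call <- resp$choices[[1]]$message$tool_calls[[1]]
args <- jsonlite::fromJSON(call$`function`$arguments)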
Examples and Use Cases of Function Calling in LLM
Apart from the above use cases mentioned in the OpenAI programming guide [10], Ben Lorica visually and comprehensively captures use cases of general function calling in LLMs, including the OpenAI Assistant Tools API [11]. Lorica succinctly states that early use cases include applications such as customer service chatbots, data analysis assistants, and code generation tools. Other examples extend to creative, logistical, and operational domains: writing assistants, scheduling agents, news summarizers, etc.
·ai.gopubby.com·
Prompt and empower your LLM, the tidy way
The tidyprompt package allows users to prompt and empower their large language models (LLMs) in a tidy way. It provides a framework to construct LLM prompts using tidyverse-inspired piping syntax, with a library of pre-built prompt wrappers and the option to build custom ones. Additionally, it supports structured LLM output extraction and validation, with automatic feedback and retries if necessary. Moreover, it enables specific LLM reasoning modes, autonomous R function calling for LLMs, and compatibility with any LLM provider.
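A flavor of that piping syntax, sketched from the package’s README (function names like answer_as_integer() and send_prompt() are part of its documented API, but treat the details as approximate):

library(tidyprompt)

# wrap a prompt so the reply must be an integer; tidyprompt appends the
# instruction, validates the reply, and retries with feedback if needed
"What is 5 + 5?" |>
  answer_as_integer() |>
  send_prompt(llm_provider_openai())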
·tjarkvandemerwe.github.io·
Webcrumbs
Use our Frontend AI tool to generate components from images and text. Sign up for early access to our open-source JavaScript plugin builder.
·webcrumbs.org·
Coze: Next-Gen AI Chatbot Developing Platform
Coze is a next-generation AI application and chatbot developing platform for everyone. Regardless of your programming experience, Coze enables you to effortlessly create various chatbots and deploy them across different social platforms and messaging apps.
·coze.com·
Provide me a comprehensive data model for the backend of a client portal / CRM
Designing a comprehensive data model for the backend of a client portal or Customer Relationship Management (CRM) system involves creating a structured database that can efficiently manage and process data related to user management, contact management, task management, sales pipeline, reporting, and interactions such as emails, calls, and transactions. This model will serve as the foundation...
·perplexity.ai·