Found 2225 bookmarks
Building the Entrata KPI Scorecard
A description of our Entrata KPI Scorecard project, which automates a scorecard of KPIs drawn from data in Entrata reports. RentViewer now has a connector for the Entrata API; for this project we pulled the Entrata P&L, Box Score, and Resident Retention reports through it.
·rentviewer.com·
Intro | Plandex Docs
Plandex is an open source, terminal-based AI coding engine that helps you work on complex, real-world development tasks with LLMs.
·docs.plandex.ai·
Turn HTTP Traffic into OpenAPI with Optic
Capture real HTTP traffic from production or anywhere else, and create OpenAPI from it, for documentation, mocks, SDKs, or contract testing.
·apisyouwonthate.com·
Forget LangChain, CrewAI and AutoGen — Try This Framework and Never Look Back
In the rapidly evolving field of artificial intelligence, developers are inundated with frameworks and tools promising to simplify the…
Introducing Atomic Agents
Atomic Agents is an open-source framework designed to be as lightweight, modular, and composable as possible. It embraces the principles of the Input–Process–Output (IPO) model and atomicity, ensuring that every component is single-purpose, reusable, and interchangeable.
Why Does Atomic Agents Exist?

Atomic Agents was born out of the necessity to address the shortcomings of existing frameworks. It aims to:
- Streamline AI development by providing clear, manageable components.
- Eliminate redundant complexity and unnecessary abstractions that plague other frameworks.
- Promote flexibility and consistency, allowing developers to focus on building effective AI applications rather than wrestling with the framework itself.
- Encourage best practices by gently nudging developers toward modular, maintainable code structures.
The Programming Paradigms Behind Atomic Agents
The Input–Process–Output (IPO) Model

At the core of Atomic Agents lies the Input–Process–Output (IPO) model, a fundamental programming paradigm that structures programs into three distinct phases:
- Input: Data is received from the user or another system.
- Process: The data is manipulated or transformed.
- Output: The processed data is presented as a result.

This model promotes clarity and simplicity, making it easier to understand and manage the flow of data through an application.
In Atomic Agents, this translates to:
- Input Schemas: Define the structure and validation rules for incoming data using Pydantic.
- Processing Components: Agents and tools perform operations on the data.
- Output Schemas: Ensure that the results are structured and validated before being returned.
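To make the IPO mapping concrete, here is a minimal sketch using plain Pydantic. The schema and function names are illustrative stand-ins, not classes from the Atomic Agents library, which layers its own base classes on top of schemas like these.

from pydantic import BaseModel, Field

class QueryInput(BaseModel):
    """Input: what the user wants answered."""
    question: str = Field(..., description="The user's question")

class QueryOutput(BaseModel):
    """Output: structured, validated result of the processing step."""
    queries: list[str] = Field(..., description="Search queries derived from the question")

def process(data: QueryInput) -> QueryOutput:
    # Process: a real agent would call an LLM here; we just derive a
    # trivial query so the example stays self-contained and runnable.
    return QueryOutput(queries=[data.question.strip().rstrip("?")])

print(process(QueryInput(question="What is the IPO model?")).model_dump_json())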
Atomicity: Building Blocks of Functionality

The concept of atomicity involves breaking down complex systems into their smallest functional parts, or "atoms." Each atom:
- Has a single responsibility, making it easier to understand and maintain.
- Is reusable, allowing components to be used across different parts of an application or even in different projects.
- Can be combined with other atoms to build more complex functionalities.

By focusing on atomic components, Atomic Agents promotes a modular architecture that enhances flexibility and scalability.
The Anatomy of an Agent

In Atomic Agents, an AI agent is composed of several key components:
- System Prompt: Defines the agent's behavior and purpose.
- Input Schema: Specifies the expected structure of input data.
- Output Schema: Defines the structure of the output data.
- Memory: Stores conversation history or state information.
- Context Providers: Inject dynamic context into the system prompt at runtime.
- Tools: External functions or APIs the agent can utilize.

Each component is designed to be modular and interchangeable, adhering to the principles of separation of concerns and single responsibility.
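To make that anatomy concrete, here is a deliberately generic sketch in plain Python dataclasses. It mirrors the component list above; it is not the Atomic Agents API itself.

from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Agent:
    # Generic illustration of the components listed above.
    system_prompt: str                 # defines behavior and purpose
    input_schema: type                 # expected input structure
    output_schema: type                # structure of the output
    memory: list[dict] = field(default_factory=list)  # conversation history
    context_providers: list[Callable[[], str]] = field(default_factory=list)
    tools: dict[str, Callable[..., Any]] = field(default_factory=dict)

    def build_prompt(self) -> str:
        # Context providers inject dynamic text into the system prompt at runtime.
        extras = "\n".join(provider() for provider in self.context_providers)
        return f"{self.system_prompt}\n{extras}".strip()

agent = Agent(
    system_prompt="You summarize articles.",
    input_schema=str,
    output_schema=str,
    context_providers=[lambda: "Today is a weekday."],
)
print(agent.build_prompt())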
Modularity and Composability

Modularity is at the heart of Atomic Agents. By designing components to be self-contained and focused on a single task, developers can:
- Swap out tools or agents without affecting the rest of the system.
- Fine-tune individual components, such as system prompts or schemas, without unintended side effects.
- Chain agents and tools seamlessly by aligning their input and output schemas.
Chaining Schemas and Agents

Atomic Agents simplifies the process of chaining agents and tools by aligning their input and output schemas. For example, suppose you have a query-generation agent and a web-search tool: by setting the output schema of the query agent to match the input schema of the search tool, you can chain them directly.
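A minimal sketch of that chaining idea, again with plain Pydantic and hypothetical stand-ins for the agent and tool: because the agent's output schema is the tool's input schema, chaining reduces to ordinary function composition.

from pydantic import BaseModel

class SearchToolInput(BaseModel):
    queries: list[str]

class QueryAgentOutput(SearchToolInput):
    """The agent is configured to emit exactly the tool's input schema."""

def query_agent(question: str) -> QueryAgentOutput:
    # Stand-in for an LLM-backed query-generation agent.
    return QueryAgentOutput(queries=[question, f"{question} tutorial"])

def search_tool(params: SearchToolInput) -> list[str]:
    # Stand-in for a real web-search tool; returns fake result URLs.
    return [f"https://example.com/?q={q.replace(' ', '+')}" for q in params.queries]

print(search_tool(query_agent("atomic agents framework")))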
Why Atomic Agents Is Better Than the Rest

Eliminating Unnecessary Complexity

Unlike frameworks that introduce multiple layers of abstraction, Atomic Agents keeps things straightforward. Each component serves a clear purpose, and there's no hidden magic to decipher.
- Transparent Architecture: You have full visibility into how data flows through your application.
- Easier Debugging: With less complexity, identifying and fixing issues becomes more manageable.
- Reduced Learning Curve: Developers can get up to speed quickly without needing to understand convoluted abstractions.
Standalone and Reusable Components

Each part of Atomic Agents can be run independently, promoting reusability and modularity.
- Testable in Isolation: Components can be individually tested, ensuring reliability before integration.
- Reusable Across Projects: Atomic components can be used in different applications, saving development time.
- Easier Maintenance: Isolating functionality reduces the impact of changes and simplifies updates.
Built by Developers, for Developers

Atomic Agents is designed with real-world development challenges in mind. It embraces time-tested programming paradigms and prioritizes developer experience.
- Solid Programming Foundations: By following the IPO model and atomicity, the framework encourages best practices.
- Flexibility and Control: Developers have the freedom to customize and extend components as needed.
- Community-Driven: As an open-source project, it invites contributions and collaboration from the developer community.
The Atomic Assembler CLI: Managing Tools Made Easy
One of the standout features of Atomic Agents is the Atomic Assembler CLI, a command-line tool that simplifies the management of tools and agents.
You can manually download the tools or copy/paste their source code from the Atomic Agents GitHub repository and place them in the atomic-forge folder. The option we will use, however, is the Atomic Assembler CLI, which downloads the tools for you.
Key Features
- Download and Manage Tools: Easily add new tools to your project without manual copying or dependency issues.
- Avoid Dependency Clutter: Install only the tools you need, keeping your project lean.
- Modify Tools Effortlessly: Each tool is self-contained with its own tests and documentation.
- Access Tools Directly: If you prefer, you can manage tools manually by accessing their folders.
·generativeai.pub·
Crafting Intelligent User Experiences: A Deep Dive into OpenAI Assistants API
Elevate, Enhance, and Empower your apps with Assistants APIs and Tools
What's an OpenAI Assistant? Think of it as software glue that lets you build agent-like capabilities into your applications, conducting tasks expressed as natural-language instructions. Because it can understand instructions, it can leverage OpenAI's SOTA models and tools to carry out tasks. With the stateful Assistants API, you can create Assistants within your application and gain access to three types of supported tools: Code Interpreter, Retrieval, and Function calling [5]. At its core are a few concepts and components that interact cogently to enable agent-like capabilities.
Assistants API, concepts, components, and tools

Unfortunately, the OpenAI documentation falls short in explaining or illustrating these components in finer detail and showing how they work together. Randy Michak of Empowerment AI does a fine job of dissecting these core components and illustrating their flow and data interactions [7]. Inspired by Michak, I mildly modified Figure 4 to show the dynamic interaction and data flow among the Assistants API components.
To get started, the OpenAI guide stipulates four simple steps to glue these core components together for coordination [8]:

Step 1: Create an Assistant: declare a custom model and provide instructions for the Assistant. This helps the Assistant select the appropriate supported tool to employ.
Step 2: Create a Thread: a stateful session from which the Assistant retrieves messages and to which it adds its own.
Step 3: Use the Thread as a conversational session to add messages for the Assistant to consume.
Step 4: Run the Assistant on a newly added Thread message to trigger a response. The Run is the Assistant's asynchronous runtime environment.
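Here is a sketch of those four steps with the openai Python SDK's beta Assistants interface as it looked at the time of writing; method names and the model ID may differ in later SDK versions.

import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: create an Assistant with a model, instructions, and tools.
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. Answer concisely.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# Step 2: create a Thread, the stateful conversation session.
thread = client.beta.threads.create()

# Step 3: add a user message to the Thread.
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Solve 3x + 11 = 14."
)

# Step 4: run the Assistant on the Thread. The Run is asynchronous,
# so poll until it finishes, then read the newest message back.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)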
How does it all work together?
Let's methodically walk through a simple example where we want to accomplish the following:
- Integrate the Assistants API with the Retrieval tool to a) upload a couple of PDF documents and b) use an Assistant to query their contents. Consider this a mini Retrieval-Augmented Generation (RAG) application.
- Use File objects to upload the PDFs so that the Assistant can access them.
- Create and employ Assistant, Thread, Message, and Run objects to query the uploaded PDF documents.
- Coordinate all these concrete objects to interact and interplay together as part of my application.
Step 1: Create File objects as our knowledge base

Upload your PDFs into the retriever's database using a File object. The Assistants API breaks them into chunks and saves them as indexes and vector embeddings. When you ask a question, the retriever finds the best matches and helps the Assistant give you a detailed answer, just like a big RAG retriever.
Step 2: Create an Assistant object

To conduct tasks with an Assistant, first create an AI Assistant object. As parameters, supply the Assistant with a model, instructional behavior, tools to use, and file IDs to employ for its knowledge base.
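A sketch of Steps 1 and 2 using the v1 beta shapes the article describes (a retrieval tool plus file_ids on the Assistant). Later API versions renamed Retrieval to file_search and moved files into vector stores, so adjust for your SDK; the file names here are hypothetical.

from openai import OpenAI

client = OpenAI()

# Step 1: upload the PDFs as File objects for the knowledge base.
file_ids = []
for path in ("paper_one.pdf", "paper_two.pdf"):  # hypothetical file names
    with open(path, "rb") as f:
        uploaded = client.files.create(file=f, purpose="assistants")
    file_ids.append(uploaded.id)

# Step 2: create the Assistant with the Retrieval tool and the file IDs.
assistant = client.beta.assistants.create(
    name="PDF Query Assistant",
    instructions="Answer questions using only the uploaded documents.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=file_ids,
)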
·ai.gopubby.com·
LLM Beyond its Core Capabilities as AI Assistants or Agents
Transform your LLM into a helpful assistant with function calling
Both the OpenAI programming guide and the Anyscale Endpoints blog [7] distill down to simple steps:
1. Call the model with the user query and a list of functions defined in the Chat Completions API tools parameter.
2. The model can choose to call one or more functions; if so, the content will be a stringified JSON object adhering to your custom schema.
3. Parse the string into JSON in your code, and call your function with the provided arguments if they exist.
4. Call the model again, appending the function response as a new message, and let the model summarize the results back to the user.

Following these simple steps, our user_content to the LLM generates the three required parameters (location, latitude, longitude) as a JSON object in its response.
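A sketch of that loop against the Chat Completions API; the weather function, its schema, and the model ID are illustrative stand-ins.

import json
from openai import OpenAI

client = OpenAI()

def get_weather(location: str, latitude: float, longitude: float) -> str:
    # Stub standing in for a real weather lookup.
    return json.dumps({"location": location, "temperature_c": 21})

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "latitude": {"type": "number"},
                "longitude": {"type": "number"},
            },
            "required": ["location", "latitude", "longitude"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in San Francisco?"}]

# Step 1: call the model with the user query and the tool definitions.
response = client.chat.completions.create(
    model="gpt-4-1106-preview", messages=messages, tools=tools
)
reply = response.choices[0].message

# Steps 2-3: if the model chose a function, parse the stringified JSON
# arguments and call the function with them.
if reply.tool_calls:
    messages.append(reply)
    for call in reply.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        # Step 4: append the function response as a new message...
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # ...and let the model summarize the results back to the user.
    final = client.chat.completions.create(
        model="gpt-4-1106-preview", messages=messages
    )
    print(final.choices[0].message.content)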
Examples and Use Cases of Function Calling in LLM
Apart from the use cases mentioned in the OpenAI programming guide [10], Ben Lorica visually and comprehensively captures use cases of general function calling in LLMs, including the OpenAI Assistant Tools API [11]. Lorica succinctly states that early use cases include applications such as customer service chatbots, data analysis assistants, and code generation tools. Other examples extend to creative, logistical, and operational domains: writing assistants, scheduling agents, news summarizers, etc.
·ai.gopubby.com·
OpenAI Platform - Assistants API
Explore developer resources, tutorials, API docs, and dynamic examples to get the most out of OpenAI's platform.
An Assistant represents an entity that can be configured to respond to a user's messages using several parameters like model, instructions, and tools.
·platform.openai.com·
Prompt and empower your LLM, the tidy way
The tidyprompt package allows users to prompt and empower their large language models (LLMs) in a tidy way. It provides a framework to construct LLM prompts using tidyverse-inspired piping syntax, with a library of pre-built prompt wrappers and the option to build custom ones. Additionally, it supports structured LLM output extraction and validation, with automatic feedback and retries if necessary. Moreover, it enables specific LLM reasoning modes, autonomous R function calling for LLMs, and compatibility with any LLM provider.
·tjarkvandemerwe.github.io·
REST API in R with plumber
API and R

Nowadays, it's pretty much expected that software comes with an HTTP API interface. Every programming language out there offers a way to expose APIs or make GET/POST/PUT requests, including R. In this post, I'll show you how to create an API using the plumber package. Plus, I'll give you tips on how to make it more production-ready - I'll tackle scalability, statelessness, caching, and load balancing. You'll even see how to consume your API with other tools like Python, curl, and R's own httr package.
# When an API is started, it might take some time to initialize.
# This function stops the main execution and waits until the
# plumber API is ready to take queries.
wait_for_api <- function(log_path, timeout = 60, check_every = 1) {
  times <- timeout / check_every
  for (i in seq_len(times)) {
    Sys.sleep(check_every)
    if (any(grepl(readLines(log_path), pattern = "Running plumber API"))) {
      return(invisible())
    }
  }
  stop("Waiting for the API timed out!")
}
Oh, and in some examples I am using redis. So, before you dive in, make sure to fire up a simple redis server. At the end of the script I'll be turning redis off, so don't use it for anything else at the same time. And just a reminder: this code isn't meant to be run on a production server.
redis is launched in the background, so you might want to wait a little bit to make sure it's fully up and running before moving on.
wait_for_redis <- function(timeout = 60, check_every = 1) {
  times <- timeout / check_every
  for (i in seq_len(times)) {
    Sys.sleep(check_every)
    status <- suppressWarnings(
      system2("redis-cli", "PING", stdout = TRUE, stderr = TRUE) == "PONG"
    )
    if (status) {
      return(invisible())
    }
  }
  stop("Waiting for redis timed out!")
}
First off, let's talk about logging. I try to log as much as possible, especially in critical areas like database accesses and interactions with other systems. This way, if there's an issue in the future (and trust me, there will be), I should be able to diagnose the problem by looking at the logs alone. Logging is like "print debugging" (putting print("I am here"), print("I am here 2") everywhere), but done ahead of time. I always try to think about what information might be needed to make a correct diagnosis, so logging variable values is a must. The logger and glue packages are your best friends in that area.
Next, it might also be useful to add a unique request identifier (I am doing that in the setuuid filter) to be able to track a request across the whole pipeline (since a single request might be passed across many functions). You might also want to add other identifiers, such as MACHINE_ID - your API might be deployed on many machines, so it could be helpful to know whether a problem is associated with a specific instance or is global.
In general, you shouldn't worry too much about the size of the logs. Even if you generate ~10KB per request, it will take 100,000 requests to generate 1GB. And for a plumber API, 100,000 requests generated in a short time is A LOT - in such a scenario you should look into other languages. And if you have that many requests, you probably have a budget for storing those logs :)
It might also be a good idea to set up an automatic system to monitor those logs (e.g. Amazon CloudWatch if you are on AWS). In my example I would definitely monitor for the "Error when reading key from cache" string - that would give me an indication of any ongoing problems with the API cache.
Speaking of cache, you might use it to save a lot of resources. Caching is a very broad topic with many pitfalls (what to cache, stale caches, etc.), so I won't spend too much time on it, but you might want to read at least a little about it. In my example, I am using the redis key-value store, which allows me to save the result for a given request; if another request asks for the same data, I can read it from redis much faster.
Note that you could use the memoise package to achieve a similar thing using R only. However, redis might be useful when you are running multiple workers: one cached request becomes available to all other R processes. But if you need to deploy just one process, memoise is fine, and it does not introduce another dependency - which is always a plus.
info <- function(req, ...) {
  do.call(
    log_info,
    c(
      list("MachineId: {MACHINE_ID}, ReqId: {req$request_id}"),
      list(...),
      .sep = ", "
    ),
    envir = parent.frame(1)
  )
}
#* Log some information about the incoming request
#* https://www.rplumber.io/articles/routing-and-input.html - this is a must read!
#* @filter setuuid
function(req) {
  req$request_id <- UUIDgenerate(n = 1)
  plumber::forward()
}
#* Log some information about the incoming request
#* @filter logger
function(req) {
  if (!grepl(req$PATH_INFO, pattern = "PATH_INFO")) {
    info(
      req,
      "REQUEST_METHOD: {req$REQUEST_METHOD}",
      "PATH_INFO: {req$PATH_INFO}",
      "HTTP_USER_AGENT: {req$HTTP_USER_AGENT}",
      "REMOTE_ADDR: {req$REMOTE_ADDR}"
    )
  }
  plumber::forward()
}
To run the API in the background, one additional file is needed. Here I am creating it using a simple bash script.
library(plumber)
library(optparse)
library(uuid)
library(logger)

MACHINE_ID <- "MAIN_1"
PORT_NUMBER <- 8761

log_level(logger::TRACE)

pr("tmp/api_v1.R") %>%
  pr_run(port = PORT_NUMBER)
·zstat.pl·
SPA Mode | Remix
From the beginning, Remix's opinion has always been that you own your server architecture. This is why Remix is built on top of the Web Fetch API and can run on any modern runtime via built-in or community-provided adapters. While we believe that having a server provides the best UX/Performance/SEO/etc. for most apps, it is also undeniable that there exist plenty of valid use cases for a Single Page Application in the real world:
SPA Mode is basically what you'd get if you had your own React Router + Vite setup using createBrowserRouter/RouterProvider, but along with some extra Remix goodies:
- File-based routing (or config-based via routes())
- Automatic route-based code-splitting via route.lazy
- <Link prefetch> support to eagerly prefetch route modules
- <head> management via Remix <Meta>/<Links> APIs

SPA Mode tells Remix that you do not plan on running a Remix server at runtime, that you wish to generate a static index.html file at build time, and that you will only use Client Data APIs for data loading and mutations. The index.html is generated from the HydrateFallback component in your root.tsx route. The initial "render" to generate the index.html will not include any routes deeper than root. This ensures that the index.html file can be served/hydrated for paths beyond / (i.e., /about) if you configure your CDN/server to do so.
·remix.run·
RESTful API Design Best Practices Guide 2024
Guide to RESTful API design best practices in 2024 covering resource-based architecture, stateless communication, client-server separation, URI design, HTTP method usage, security, performance optimization, and more.
·daily.dev·
API Documentation Using Hacker Tools Mitmproxy2swagger
Discover mitmproxy2swagger: A quick solution to generate API documentation, bridging the gap between backend and frontend teams effortlessly in just 2 mins
API documentation is a collection of references, tutorials, documents, or videos that help developers use your API, governed by the OpenAPI Specification (OAS). An API (application programming interface) is a data-sharing technique that helps applications communicate with each other. Not the best definition in the world, but I like to think of an API as a dynamic messenger: it can store your message, process it, and deliver it to multiple people, and it is responsible for the security of your message until it reaches you.
There are a lot of tools on the market used to produce great documentation: Swagger, Postman, Doxygen, ApiDoc, and Document360, just to name a few. However, most developers remain oblivious to the tools developed for reconnaissance, which, once you interact with them, turn out to be useful to developers as well.
mitmproxy2swagger
mitmweb is a component of the mitmproxy project; it serves to intercept the requests channeled to the listener port opened at 8080.
Next, we'll need to configure the source of the requests, for which we'll use Postman.
Click the gear icon at the top right corner of the Postman interface to access the settings.
In the settings pop-up, select Proxy and toggle "use custom proxy configuration". Here we'll add the proxy listener port so that Postman channels all requests through our custom proxy from mitmproxy.
·muriithigakuru.hashnode.dev·
Reverse Engineer an API using MITMWEB and POSTMAN and create a Swagger file (crAPI)
Many times when we are trying to pentest an API, we might not get access to the Swagger file or the documentation of the API. Today we will try to create the Swagger file using mitmweb and Postman.
Man in The Middle Proxy (MITMweb)
Run mitmweb through our command line in Kali.
As we can see, it starts to listen on port 8080 for HTTP/HTTPS traffic. We will make sure it's running by navigating to the above address, which is localhost at port 8081.
Then we will proxy our traffic through the Burp Suite proxy port 8080, because we already have mitmweb listening on this port (make sure Burp is closed).
Then we will stop the capture and use mitmproxy2swagger to analyse it.
·medium.com·
Reverse engineering a Web API
Introduction

Most websites or web services have an API in the backend that delivers requested data to the frontend. This can be anything from the Google Search API to delivering a message on Discord. Some people in the gaming community scan a game's username database for certain available special names, like 3-letter names, to register them. I've been asked to write a tool to automate that. To do that, I had to reverse engineer the R6DB API. I could then use that API to check for available usernames programmatically. (This API has since shut down, likely due to abuse.) The method I'm going to show also works on Electron apps such as Discord by bringing up the DevTools. For any other app, you can use something like Fiddler to intercept the web requests.
·vollragm.github.io·
Agent Protocol
Agent Protocol - The open source communication protocol for AI agents.
·agentprotocol.ai·