Best Practices
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Keep a Single Source of Truth Regardless of your design approach (design-first or code-first), always keep a single source of truth, i.e., information should not be duplicated in different places. It is the same concept used in programming, where repeated code should be moved to a common function.
Otherwise, one of the places will eventually be updated while the other won’t, leading to headaches… in the best of cases. For instance, it is commonplace to use code annotations to generate an OpenAPI description and then commit the description to source control while the annotations still linger in the code. As a result, newcomers to the project will not know which one is actually in use, and mistakes will be made. Alternatively, you can use a Continuous Integration test to ensure that the two sources stay consistent; a minimal sketch of such a check follows.
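For illustration, here is a minimal sketch of such a CI check in Python. The file paths, and the assumption that your toolchain exports a freshly generated description to build/openapi.generated.yaml, are hypothetical and depend entirely on your setup.

```python
# Minimal sketch of a CI consistency check between the committed OAD and the one
# generated from code annotations. Paths are placeholders for your own toolchain.
import sys
import yaml  # PyYAML

def load(path: str) -> dict:
    with open(path) as f:
        return yaml.safe_load(f)

committed = load("openapi.yaml")                  # the OAD under source control
generated = load("build/openapi.generated.yaml")  # freshly exported from the code

if committed != generated:
    sys.exit("OpenAPI Description is out of sync with the code: regenerate and commit it.")
print("OpenAPI Description matches the generated one.")
```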
Add OpenAPI Descriptions to Source Control OpenAPI Descriptions are not just a documentation artifact: they are first-class source files which can drive a great number of automated processes, including boilerplate generation, unit testing and documentation rendering. As such, OADs should be committed to source control, and, in fact, they should be among the first files to be committed. From there, they should also participate in Continuous Integration processes.
Make the OpenAPI Descriptions Available to the Users Beautifully rendered documentation can be very useful for the users of an API, but sometimes they might want to access the source OAD, for instance to generate client code or to build automated bindings for some language. Therefore, making the OAD available to the users is an added bonus. The documents that make up the OAD can even be made available through the same API to allow runtime discovery.
There is Seldom Need to Write OpenAPI Descriptions by Hand Since OADs are plain text documents in an easy-to-read format (be it JSON or YAML), API designers are often tempted to write them by hand. While there is nothing stopping you from doing this, and hand-written API descriptions are usually the most terse and efficient, approaching any big project this way is highly impractical. Instead, try the other creation methods listed below and choose the one that best suits you and your team (no YAML or JSON knowledge needed!):
OpenAPI Editors: Be it text editors or GUI editors, they usually take care of repetitive tasks, let you keep a library of reusable components, and provide a real-time preview of the generated documentation.
Domain-Specific Languages: As their name indicates, DSLs are API description languages tailored to specific development fields. A tool is then used to produce the OpenAPI Description. A new language has to be learned but, in return, extremely concise descriptions can be achieved.
Code Annotations: Most programming languages allow you to annotate the code, be it with specific syntax or with general code comments. These annotations can be used, for example, to extend a method signature with information about the API endpoint and HTTP method that lead to it. A tool can then parse the code annotations and generate OADs automatically (a minimal sketch of this approach follows this list). This method fits very nicely with the code-first approach, so keep in mind the first advice given at the top of this page when using it (Use a Design-First Approach)…
A Mix of All the Above: It’s perfectly possible to create the bulk of an OpenAPI Description using an editor or DSL and then hand-tune the resulting file. Just be aware of the second piece of advice above (Keep a Single Source of Truth): once you modify a file, it becomes the source of truth and the previous one should be discarded (maybe keep it as a backup, but out of the sight and reach of children and newcomers to the project).
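As one concrete illustration of the code-annotation route (the framework choice, route, and model names here are assumptions, not something the text above prescribes), a type-annotated Python handler can drive generation of the OAD:

```python
# Minimal sketch of the code-annotation approach, using FastAPI as one example of a
# framework that derives an OpenAPI Description from annotated handlers.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Users API", version="1.0.0")

class User(BaseModel):
    id: int
    name: str

@app.get("/users/{user_id}", response_model=User, tags=["users"])
def read_user(user_id: int) -> User:
    """Return a single user by id."""
    return User(id=user_id, name="Ada")

if __name__ == "__main__":
    # Export the generated description so it can be committed and diffed in CI,
    # keeping the annotated code as the single source of truth.
    import json
    print(json.dumps(app.openapi(), indent=2))
```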
Describing Large APIs
Do not repeat yourself (the DRY principle): If the same piece of YAML or JSON appears more than once in the document, it’s time to move it to the components section and reference it from other places using $ref (see Reusing Descriptions). Not only will the resulting document be smaller, but it will also be much easier to maintain. Components can be referenced from other documents, so you can even reuse them across different API descriptions! A small sketch of this kind of reuse appears after these tips.
Split the description into several documents: Smaller files are easier to navigate, but too many of them are equally taxing. The key lies somewhere in the middle. A good rule of thumb is to use the natural hierarchy present in URLs to build your directory structure. For example, put all routes starting with /users (like /users and /users/{id}) in the same file (think of it as a “sub-API”). Bear in mind that some tools might have issues with large files, whereas some other tools might not handle too many files gracefully. The solution will have to take your toolkit into account.
Use tags to keep things organized: Tags have not been described in the Specification chapter, but they can help you arrange your operations and find them faster. A tag is simply a piece of metadata (a unique name and an optional description) that you can attach to operations. Tools, especially GUI editors, can then sort all your API’s operations by their tags to help you keep them organized.
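To make the components/$ref reuse concrete, here is a minimal sketch that builds a small OAD as a Python dict and dumps it to YAML; the schema, path, and tag names are purely illustrative.

```python
# Minimal sketch of reuse via components/$ref; schema, path, and tag names are illustrative.
import yaml  # PyYAML

oad = {
    "openapi": "3.1.0",
    "info": {"title": "Users API", "version": "1.0.0"},
    "components": {
        "schemas": {
            "User": {  # defined once here...
                "type": "object",
                "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
            }
        }
    },
    "paths": {
        "/users": {
            "get": {
                "tags": ["users"],
                "responses": {"200": {
                    "description": "All users",
                    "content": {"application/json": {"schema": {
                        "type": "array",
                        "items": {"$ref": "#/components/schemas/User"},  # ...reused here
                    }}},
                }},
            }
        },
        "/users/{id}": {
            "get": {
                "tags": ["users"],
                "parameters": [{"name": "id", "in": "path", "required": True,
                                "schema": {"type": "integer"}}],
                "responses": {"200": {
                    "description": "A single user",
                    "content": {"application/json": {
                        "schema": {"$ref": "#/components/schemas/User"},  # ...and here
                    }},
                }},
            }
        },
    },
}

print(yaml.safe_dump(oad, sort_keys=False))
```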
Links to External Best Practices There’s quite a bit of literature about how to organize your API more efficiently. Make sure you check out how other people solved the same issues you are facing now! For example: The API Stylebook contains internal API Design Guidelines shared with the community by some well known companies and government agencies.
Best Practices This page contains general pieces of advice which do not strictly belong to the Specification Explained chapter because they are not directly tied to the OpenAPI Specification (OAS). However, they greatly simplify creating and maintaining OpenAPI Descriptions (OADs), so they are worth keeping in mind.
Use a Design-First Approach Traditionally, two main approaches exist when creating OADs: Code-first and Design-first. In the Code-first approach, the API is first implemented in code, and then its description is created from it, using code comments, code annotations or simply written from scratch. This approach does not require developers to learn another language so it is usually regarded as the easiest one. Conversely, in Design-first, the API description is written first and then the code follows. The first obvious advantages are that the code already has a skeleton upon which to build, and that some tools can provide boilerplate code automatically. There have been a number of heated debates over the relative merits of these two approaches but, in the opinion of the OpenAPI Initiative (OAI), the importance of using Design-first cannot be stressed strongly enough.
The reason is simple: the set of APIs that can be created in code is far larger than the set that can be described in OpenAPI. To emphasize: OpenAPI is not capable of describing every possible HTTP API; it has limitations. Therefore, unless these descriptive limitations are known and taken into account when coding the API, they will rear their ugly head later on when trying to create an OpenAPI description for it. At that point, the right fix is to change the code so that it exposes an API which can actually be described with OpenAPI (or to switch to Design-first altogether). Sometimes, however, because it is late in the process, it will be tempting to twist the API description so that it more or less matches the actual API. It goes without saying that this leads to unintuitive and incomplete descriptions that will rarely scale in the future. Finally, there exist a number of validation tools that can verify that the implemented code adheres to the OpenAPI description; one such check is sketched below. Running these tools as part of a Continuous Integration process allows changing the OpenAPI Description with peace of mind, since deviations in the code behavior will be promptly detected. Bottom line: OpenAPI opens the door to a wealth of automated tools. Make sure you use them!
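As one example of such a validation tool (the tool choice and the spec URL are my assumptions, not something the text prescribes), Schemathesis can generate requests from the OAD and validate the running implementation against it; a sketch like the one below would be run under pytest in CI.

```python
# Minimal sketch of spec-versus-implementation validation with Schemathesis, one example
# of such a tool; the spec URL is a placeholder and the entry point may differ between
# Schemathesis versions.
import schemathesis

schema = schemathesis.from_uri("http://localhost:8000/openapi.json")

@schema.parametrize()
def test_implementation_matches_description(case):
    # Sends a generated request and validates the response against the OAD.
    case.call_and_validate()
```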
·learn.openapis.org·
Fast JSON, NDJSON and GeoJSON Parser and Generator
A fast JSON parser, generator and validator which converts JSON, NDJSON (Newline Delimited JSON) and GeoJSON (Geographic JSON) data to/from R objects. The standard R data types are supported (e.g. logical, numeric, integer) with configurable handling of NULL and NA values. Data frames, atomic vectors and lists are all supported as data containers translated to/from JSON. GeoJSON data is read in as simple features objects. This implementation wraps the yyjson C library.
·coolbutuseless.github.io·
SpeCrawler: Generating OpenAPI Specifications from API Documentation Using Large Language Models
In the digital era, the widespread use of APIs is evident. However, scalable utilization of APIs poses a challenge due to structure divergence observed in online API documentation. This underscores the need for automat…
·ar5iv.labs.arxiv.org·
AI Database Design Flowchart Generator
Unlock efficient database design with our AI-powered Database Design Flowchart Generator! Experience fast, accurate, and intuitive creation of complex database schemas. Save time, reduce errors, and streamline your workflow — start designing smarter today!
·taskade.com·
Continue
Amplified developers, AI-enhanced development · The leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside the IDE
·continue.dev·
Open Policy Agent
Policy-based control for cloud native environments
·openpolicyagent.org·
Software Architecture Canvas: A Collaborative Way to Your Software Architecture
The Software Architecture Canvas is a collaborative technique for elaborating the software architecture playground of a software initiative. With this canvas, you can work efficiently, iteratively, and in a time-saving manner on the software architecture of your software products as a team sport.
·workingsoftware.dev·
Architecture Principles: An approach to effective decision making in software architecture
Are you a software architect and often find it difficult to make architecture decisions in your team? This article shows you how to use architecture principles to make effective decisions in your team.
A declarative statement made with the intention of guiding architectural design decisions in order to achieve one or more qualities of a system.
If we take a closer look at this definition, we find several interesting parts. "[...] intention of guiding architectural design decisions [...]" As a software architect or a team of software engineers, you have to deal with and decide on many architecture issues. But how do you decide these questions? Gut feeling? :-) That is probably not the right approach. As we learn from the Software Architecture Canvas, there are quality goals that are drivers of architecture.
What are the basic characteristics of good architecture principles?
Comprehensible & clear
Architectural principles should be like marketing slogans.
Testable The principle should be verifiable: it should be possible to check whether work is done according to the principle and where exceptions are made.
Atomic The principle requires no further context or knowledge to be understood. In summary, architectural principles should be written to enable teams to make decisions: they're clear, provide decision support, and are atomic.
What are the pitfalls of creating architecture principles? What do you think about the following principle 👇? "All software should be written in a scalable manner."
That's why we adopted the following architecture principle in a product team: "Use cloud services if being locked in to a particular cloud provider is acceptable."
Whether this vendor lock-in is acceptable depends on several criteria: the effort required to replace the managed service, and an acceptable lead time for providing alternatives. Let's take a look at an example technological decision we had to make in the past: we needed to evaluate a centralised identity and access management (IAM) solution for our SaaS products. In addition to meeting the functional requirements, we had two powerful IAM solutions on the shortlist: Keycloak (self-hosted) and Auth0 (managed cloud service).
Following the defined principle of "Use cloud services if being locked in to a particular cloud provider is acceptable", we concluded that a centralised IAM system should be self-managed rather than managed by a third-party provider: replacing a managed IAM product is a huge effort, so there is no reasonable lead time for deploying an alternative. In summary, vendor lock-in wasn't acceptable to us in this case, so the principle efficiently guided us to the right decision.
Example 2: "Prefer standard data formats over third-party and custom formats"
The next principle was about the selection of protocols for service communication. "Prefer standard data formats over third-party and custom formats"
If you have multiple services that need to communicate with each other, the question of protocol and format arises. In the protocol ecosystem there is a fairly new kid on the block: gRPC. gRPC (gRPC Remote Procedure Calls) is a cross-platform, open-source, high-performance protocol for remote procedure calls, originally developed by Google to connect a large number of microservices. So in our team, the question is: RESTful HTTP vs. gRPC?
The selection of a protocol thus depends heavily on the quality and change scenarios of the services involved. But if you can meet the quality goals and underlying requirements with both options, as with RESTful HTTP vs. gRPC here, then consider yourself lucky to have such a principle. This principle helped us choose RESTful HTTP over gRPC, because RESTful HTTP builds on widely accepted standard data formats, while gRPC is more of a third-party format. So here the principle sped up our decision making, which doesn't mean that we never rely on gRPC in certain cases.
Software architecture may be changing in the way it's practiced, but it's more important than ever.
·workingsoftware.dev·
Best Practices for Coding with AI in 2024
Learn what steps developers who are using AI coding tools must take in order to ensure the quality and security of their AI-generated code.
·blog.codacy.com·
Intro | Plandex Docs
Plandex is an open source, terminal-based AI coding engine that helps you work on complex, real-world development tasks with LLMs.
·docs.plandex.ai·
Turn HTTP Traffic into OpenAPI with Optic
Capture real HTTP traffic from production or anywhere else, and create OpenAPI from it, for documentation, mocks, SDKs, or contract testing.
·apisyouwonthate.com·
Forget LangChain, CrewAI and AutoGen — Try This Framework and Never Look Back
In the rapidly evolving field of artificial intelligence, developers are inundated with frameworks and tools promising to simplify the…
Introducing Atomic Agents
Atomic Agents is an open-source framework designed to be as lightweight, modular, and composable as possible. It embraces the principles of the Input–Process–Output (IPO) model and atomicity, ensuring that every component is single-purpose, reusable, and interchangeable.
Why Does Atomic Agents Exist? Atomic Agents was born out of the necessity to address the shortcomings of existing frameworks. It aims to: Streamline AI development by providing clear, manageable components. Eliminate redundant complexity and unnecessary abstractions that plague other frameworks. Promote flexibility and consistency, allowing developers to focus on building effective AI applications rather than wrestling with the framework itself. Encourage best practices by gently nudging developers toward modular, maintainable code structures.
The Programming Paradigms Behind Atomic Agents
The Input–Process–Output (IPO) Model At the core of Atomic Agents lies the Input–Process–Output (IPO) model, a fundamental programming paradigm that structures programs into three distinct phases: Input: Data is received from the user or another system. Process: The data is manipulated or transformed. Output: The processed data is presented as a result. This model promotes clarity and simplicity, making it easier to understand and manage the flow of data through an application.
In Atomic Agents, this translates to: Input Schemas: Define the structure and validation rules for incoming data using Pydantic. Processing Components: Agents and tools perform operations on the data. Output Schemas: Ensure that the results are structured and validated before being returned.
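To make the IPO mapping concrete, here is a minimal sketch using plain Pydantic; the class and field names are illustrative and are not taken from the Atomic Agents API itself.

```python
# Minimal IPO sketch with plain Pydantic; names are illustrative, not the framework's own API.
from pydantic import BaseModel, Field

class QueryInput(BaseModel):           # Input: structure and validation of incoming data
    question: str = Field(..., min_length=1)

class QueryOutput(BaseModel):          # Output: structured, validated result
    answer: str

def answer_question(data: QueryInput) -> QueryOutput:
    # Process: in a real agent this step would call a model or a tool.
    return QueryOutput(answer=f"You asked: {data.question}")

result = answer_question(QueryInput(question="What is atomicity?"))
print(result.model_dump())             # Pydantic v2; use .dict() on v1
```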
Atomicity: Building Blocks of Functionality The concept of atomicity involves breaking down complex systems into their smallest functional parts, or “atoms.” Each atom: Has a single responsibility, making it easier to understand and maintain. Is reusable, allowing for components to be used across different parts of an application or even in different projects. Can be combined with other atoms to build more complex functionalities. By focusing on atomic components, Atomic Agents promotes a modular architecture that enhances flexibility and scalability.
The Anatomy of an Agent In Atomic Agents, an AI agent is composed of several key components: System Prompt: Defines the agent’s behavior and purpose. Input Schema: Specifies the expected structure of input data. Output Schema: Defines the structure of the output data. Memory: Stores conversation history or state information. Context Providers: Inject dynamic context into the system prompt at runtime. Tools: External functions or APIs the agent can utilize. Each component is designed to be modular and interchangeable, adhering to the principles of separation of concerns and single responsibility.
Modularity and Composability Modularity is at the heart of Atomic Agents. By designing components to be self-contained and focused on a single task, developers can: Swap out tools or agents without affecting the rest of the system. Fine-tune individual components, such as system prompts or schemas, without unintended side effects. Chain agents and tools seamlessly by aligning their input and output schemas.
Chaining Schemas and Agents Atomic Agents simplifies the process of chaining agents and tools by aligning their input and output schemas. Example: Suppose you have a query generation agent and a web search tool. By setting the output schema of the query agent to match the input schema of the search tool, you can directly chain them.
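A minimal sketch of that chaining idea, again with plain Pydantic; the agent and tool internals are stubbed assumptions rather than the framework's actual classes.

```python
# Minimal chaining sketch: the query generator's output schema doubles as the search
# tool's input schema. Agent and tool bodies are stubs for illustration only.
from pydantic import BaseModel

class SearchToolInput(BaseModel):
    queries: list[str]

class QueryAgentOutput(SearchToolInput):
    """The query agent emits exactly what the search tool consumes."""

class SearchToolOutput(BaseModel):
    results: list[str]

def query_agent(topic: str) -> QueryAgentOutput:
    # Stub: a real agent would ask an LLM to generate search queries.
    return QueryAgentOutput(queries=[f"{topic} tutorial", f"{topic} best practices"])

def search_tool(data: SearchToolInput) -> SearchToolOutput:
    # Stub: a real tool would call a web search API.
    return SearchToolOutput(results=[f"hit for '{q}'" for q in data.queries])

# Because the schemas line up, the agent's output feeds the tool directly.
print(search_tool(query_agent("atomic agents")).results)
```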
Why Atomic Agents Is Better Than the Rest Eliminating Unnecessary Complexity Unlike frameworks that introduce multiple layers of abstraction, Atomic Agents keeps things straightforward. Each component serves a clear purpose, and there’s no hidden magic to decipher. Transparent Architecture: You have full visibility into how data flows through your application. Easier Debugging: With less complexity, identifying and fixing issues becomes more manageable. Reduced Learning Curve: Developers can get up to speed quickly without needing to understand convoluted abstractions.
Standalone and Reusable Components Each part of Atomic Agents can be run independently, promoting reusability and modularity. Testable in Isolation: Components can be individually tested, ensuring reliability before integration. Reusable Across Projects: Atomic components can be used in different applications, saving development time. Easier Maintenance: Isolating functionality reduces the impact of changes and simplifies updates.
Built by Developers, for Developers Atomic Agents is designed with real-world development challenges in mind. It embraces time-tested programming paradigms and prioritizes developer experience. Solid Programming Foundations: By following the IPO model and atomicity, the framework encourages best practices. Flexibility and Control: Developers have the freedom to customize and extend components as needed. Community-Driven: As an open-source project, it invites contributions and collaboration from the developer community.
The Atomic Assembler CLI: Managing Tools Made Easy
One of the standout features of Atomic Agents is the Atomic Assembler CLI, a command-line tool that simplifies the management of tools and agents.
One option is to manually download the tools, or copy their source code from the Atomic Agents GitHub repository, and place them in the atomic-forge folder.
The option we will use here, however, is the Atomic Assembler CLI, which downloads the tools for us.
Key Features Download and Manage Tools: Easily add new tools to your project without manual copying or dependency issues. Avoid Dependency Clutter: Install only the tools you need, keeping your project lean. Modify Tools Effortlessly: Each tool is self-contained with its own tests and documentation. Access Tools Directly: If you prefer, you can manage tools manually by accessing their folders.
·generativeai.pub·
Crafting Intelligent User Experiences: A Deep Dive into OpenAI Assistants API
Elevate, Enhance, and Empower your apps with Assistants APIs and Tools
What’s an OpenAI Assistant? Think of it as a software glue that affords you to gel together agent-like capabilities in your applications to conduct tasks expressed as instructions in natural language to an Assistant. Able to understand instructions, it can leverage OpenAI’s SOTA models and tools to carry out tasks. With Assistants stateful API, you can create Assistants within your application, providing you access to three types of supported tools: Code Interpreter, Retrieval, and Function calling [5]. At the core it has few concepts and components that cogently interact together, to enable agent-like capabilities.
Assistants API, concepts, components, and tools Unfortunately, the OpenAI documentation falls short of explaining or illustrating these components in finer detail and showing how they work together. Randy Michak of Empowerment AI does a fine job of dissecting these core components and illustrating their flow and data interactions [7]. Inspired by Michak, I mildly modified Figure 4 to show the dynamic interaction and data flow among the Assistants API components.
To get started with Assistants, the OpenAI guide stipulates four simple steps to glue these core components together for coordination [8]. Step 1: Create an Assistant, declaring a custom model and providing instructions for the Assistant. This helps the Assistant select the appropriate supported tool to employ. Step 2: Create a Thread, a stateful session from which the Assistant retrieves messages and to which it adds its own. Step 3: Use the Thread as a conversational session to add messages for the Assistant to consume. Step 4: Run the Assistant on a newly added Thread message to trigger a response; the Run is the Assistant’s asynchronous runtime environment.
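A minimal sketch of these four steps with the official openai Python SDK (beta namespace as of the original Assistants API); the model name and prompt text are placeholders, and the beta API has continued to evolve since this was written.

```python
# Minimal sketch of the four steps with the openai Python SDK (v1.x, beta Assistants API).
# Model name and prompts are placeholders; the beta API has changed since this was written.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: create an Assistant with a model and instructions.
assistant = client.beta.assistants.create(
    name="Doc helper",
    instructions="Answer questions clearly and concisely.",
    model="gpt-4-turbo-preview",
)

# Step 2: create a Thread, the stateful conversation session.
thread = client.beta.threads.create()

# Step 3: add a user message to the Thread.
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarize what an OpenAPI Description is."
)

# Step 4: run the Assistant on the Thread and poll until the Run finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, ":", message.content[0].text.value)
```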
How does it all work together?
Let’s methodically walk through a simple example where we want to accomplish the following: Integrate Assistants API, using Retrieval tool, to a) upload a couple of pdf documents and b) use an Assistant to query the contents of the document. Consider this as a mini Retrieval Augmented Generation (RAG) application. Use Files objects to upload the pdf files so that the Assistant can access them. Create and employ the Assistant, Threads, Messages, and Run objects to query the uploaded pdf documents. Coordinate all these concrete objects to interact and interplay together as part of my application.
Step 1: Create File objects as our knowledge base Upload your PDFs into the retriever’s database using a File object. The Assistants API breaks them into chunks and saves them as indexes and vector embeddings. When you ask a question, the Retrieval tool finds the best matches and helps the Assistant give you a detailed answer, just like a full RAG retriever.
Step 2: Create an Assistant object To use an Assistant and conduct tasks, first create an AI Assistant object. As parameters, supply the Assistant with a model, behavioral instructions, the tools to use, and the file IDs to employ for its knowledge base.
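A hedged sketch of these two steps follows; the file names are placeholders, and it targets the original beta Assistants API (tool type "retrieval", assistant-level file_ids), which newer API versions have replaced with file search and vector stores.

```python
# Sketch of Steps 1-2 against the original beta Assistants API (tool type "retrieval",
# assistant-level file_ids). File names are placeholders; newer API versions use
# "file_search" with vector stores instead.
from openai import OpenAI

client = OpenAI()

# Step 1: upload the PDFs as File objects for the retrieval knowledge base.
file_ids = [
    client.files.create(file=open(path, "rb"), purpose="assistants").id
    for path in ("handbook.pdf", "faq.pdf")
]

# Step 2: create the Assistant with the Retrieval tool and the uploaded files.
assistant = client.beta.assistants.create(
    name="PDF answerer",
    instructions="Answer questions using only the attached documents.",
    model="gpt-4-turbo-preview",
    tools=[{"type": "retrieval"}],
    file_ids=file_ids,
)
print(assistant.id)
```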
·ai.gopubby.com·