APIs

92 bookmarks
Getting Creative with OpenAPI - by David Biesack
Defining resource creation and update operations with OpenAPI
unevaluatedProperties: false
This article pulls in lots of aggregated knowledge from earlier articles in the Language of API Design series: Mapping a domain model to OpenAPI Patterns for clean URLs and URL structure Defining the request body for a POST operation Defining API responses with OpenAPI response objects and response body data format with JSON Schema Composing JSON Schemas to keep the OpenAPI definition DRY Understanding the subtleties of JSON schema (notably, unevaluatedProperties) Defining reusable response objects to keep the OpenAPI definition DRY Defining how the API handles problems with client requests as well as server errors
·apidesignmatters.substack.com·
Validating API Requests - by David Biesack
Techniques for API Request Validation
A promise of REST APIs is good decoupling of clients and services. This is achieved in part by reducing business logic in the client application as much as possible. For example, a client application may use a form for collecting information used in a POST operation to create an API resource, or to edit an existing resource before updating it with a PUT or PATCH operation. The client then maps the form fields to the properties of the operation’s request body. Clients can use front-end frameworks and libraries to perform much of the low-level validation in the front end corresponding to JSON Schema constraints.
However, this only covers “syntactic” or static field-level validation. Often, an API will also have business rules that the client must follow. Secure API services will enforce those business rules in the API operations:
- Parse the options and (JSON) request body, and return 400 Bad Request if any of the request data is malformed (i.e., does not satisfy the constraints of the operation, such as a required body or required parameters, or all the JSON Schemas associated with the operation’s parameters or request body)
- Verify that the caller passes valid Authorization to the API call, and return 401 Unauthorized if not
- Verify that the caller is authorized to perform the API operation, and return a 403 Forbidden error if not
- Verify the state of the application and return 409 Conflict if the operation would put the application into an inconsistent state
- Verify the semantics of the request body and return a 422 Unprocessable Content error if the request is incomplete, inconsistent, or otherwise invalid
One pattern is to extend the API operations with a dry run feature. A dry run is a variant of the API operation which performs all (and only) the validation performed by the full operation, but does not execute the associated behavior. As such, it will return the same 400/401/403/409/422 responses that the full operation would return, allowing the client to highlight invalid form data or otherwise correct the problem. The client can use the dry run operation incrementally as the user fills out a form, and disable the “Save”, “Submit”, or similar UI controls if there are validation errors.
One way to implement a dry run is to create a separate “validation” operation for each API operation. This has the significant disadvantage of greatly increasing the footprint (size) of the API and adding a lot of duplication.
Rather than duplicate operations to add sibling validation operations, another approach is to add a ?dryRun=true query parameter to the operations. When used, the operation can return 204 No Content if the request contains no problems. The dryRun parameter acts as a “short circuit” in the API operation. The implementation performs the full validation it would normally do before executing the desired behavior, but then stops before actually executing anything other than the validation.
This pattern has a small impact on the API footprint compared to adding sibling validation operations. A smaller footprint makes the API easier to read and understand. It is also a good use of the DRY principle, since you do not have to duplicate the definition of all the operation request parameters and request bodies; such duplication would open up the chance for them to get out of sync.
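A minimal sketch of how such a parameter might be declared in OpenAPI, assuming a hypothetical createChainLink operation (the operation name and response descriptions are illustrative, not from the article):

```yaml
paths:
  /chainLinks:
    post:
      operationId: createChainLink  # hypothetical operation
      parameters:
        - name: dryRun
          in: query
          description: >-
            If true, perform all request validation,
            then stop without executing the operation's behavior.
          required: false
          schema:
            type: boolean
            default: false
      responses:
        '201':
          description: Chain link created.
        '204':
          description: Dry run succeeded; the request contains no problems.
        '400':
          description: Malformed request data.
        '422':
          description: The request is incomplete, inconsistent, or otherwise invalid.
```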
·apidesignmatters.substack.com·
Leave a calling card - by David Biesack
Make API responses more self-descriptive with reference objects
In OAS, a reference object contains a $ref value and may contain summary or description values. However, there is a kernel of usefulness here that can be used outside of defining an API with OpenAPI — we can use a construct inspired by OAS reference objects instead of terse and cryptic Restful JSON URLs or naked resource ID properties.
Such reference objects in API responses (or requests) can include other key identifying data about the referenced resource; these extra properties are optional and informative. The id or url of the resource is required to actually identify the referenced resource. This makes the overall JSON payload more self-descriptive and does not send the developer down ratholes trying to understand data they see when exploring and learning the API.
To make these reference objects even more useful, the API service can build them for you instead of making the client construct them. For example, the response from the getUniverse operation can contain a reference or ref property which is the reference object that the client can embed in other requests when citing a universe. The *Item schema in the API’s list responses (see “What am I getting out of this?”) can also include the reference object.
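For illustration, a getUniverse response might embed such a reference object (the field names and values here are hypothetical):

```json
{
  "id": "u-42",
  "name": "Example Universe",
  "ref": {
    "$ref": "https://api.example.com/universes/u-42",
    "summary": "Example Universe, created 2024-01-15"
  }
}
```

A client can then copy the ref object verbatim into other requests that cite this universe, rather than constructing a reference itself.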
·apidesignmatters.substack.com·
Composing API Models with JSON Schema
Use JSON Schema effectively to build real API request and response bodies
This approach of defining tiny schemas, such as the chainLinkContentField, is a useful design pattern for keeping API definitions DRY. This is analogous to creating small, atomic, highly reusable functions in a programming language, rather than long, complex single-purpose functions. If you find yourself repeating a property definition across multiple schemas, consider lifting that property into its own field definition schema. If there are groups of related properties that you use together in multiple schemas, you can group them together into a reusable {group}Fields schema (with a descriptive schema name that indicates its purpose, such as mutableChainLinkFields), then mix them into other schemas using the allOf schema composition construct. This practice is one form of refactoring that is useful when defining and refining schemas.
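A sketch of the pattern, reusing the schema names mentioned above (the individual properties are hypothetical):

```yaml
components:
  schemas:
    chainLinkContentField:
      type: object
      properties:
        content:
          type: string
          maxLength: 4096
    mutableChainLinkFields:
      allOf:
        - $ref: '#/components/schemas/chainLinkContentField'
        - type: object
          properties:
            comment:
              type: string
    chainLink:
      allOf:
        - $ref: '#/components/schemas/mutableChainLinkFields'
        - type: object
          properties:
            id:
              type: string
              readOnly: true
```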
Composition is not Inheritance
Warning: this JSON Schema composition construct is not inheritance… even if some tools translate this into inheritance in a target language. This means that, as the “allOf” name implies, all the schema constraints apply: you cannot override or replace constraints.
```yaml
authorId:
  description: >-
    The ID of the author resource: who created this chain link.
  type: string
  minLength: 4
  maxLength: 48
  pattern: '^[-_a-zA-Z0-9:+$]{4,48}$'
  readOnly: true
```
Suppose we want to use schema composition but loosen those constraints in the chainLink schema’s authorId to allow 2 to 64 characters. We might try to express this as follows:
```yaml
chainLink:
  title: A Chain Link
  allOf:
    - $ref: '#/components/schemas/chainLinkItem'
    - type: object
      properties:
        authorId:
          description: >-
            The ID of the author resource: who created this chain link.
          type: string
          minLength: 2
          maxLength: 64
          pattern: '^[-_a-zA-Z0-9:+$]{2,64}$'
          readOnly: false
```
However, we won’t get the desired effect. While the code generation may not change, the schema validation of a JSON object containing an authorId still enforces all of the subschema constraints. Thus, the effective schema constraints for the authorId property are the union of all the constraints:

1. type: string
2. minLength: 4
3. maxLength: 48
4. pattern: ^[-_a-zA-Z0-9:+$]{4,48}$
5. type: string
6. minLength: 2
7. maxLength: 64
8. pattern: ^[-_a-zA-Z0-9:+$]{2,64}$
While a value such as “C289D6F7-6B30-4788-9C70-4274730FAFCA” satisfies all 8 of these constraints, a shorter author ID string of 3 characters, “A00”, will fail constraints #2 and #4, and thus that JSON would be rejected.
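If the goal really is a looser authorId in one schema, one workaround (my own sketch, not from the article) is to keep the conflicting constraints out of the shared schema and declare authorId separately in each variant, so allOf never needs to override anything:

```yaml
chainLinkItem:
  allOf:
    - $ref: '#/components/schemas/mutableChainLinkFields'  # shared fields, without authorId
    - type: object
      properties:
        authorId:
          type: string
          minLength: 4
          maxLength: 48
chainLink:
  allOf:
    - $ref: '#/components/schemas/mutableChainLinkFields'
    - type: object
      properties:
        authorId:
          type: string
          minLength: 2
          maxLength: 64
```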
A final note on schema composition: the behavior of the allOf construct is well defined within JSON Schema for JSON validation. However, there is no standard for how this JSON Schema construct must be treated in other tools, such as code generation for client or server SDKs. Thus, each tool vendor (SmartBear’s swagger-codegen, the OSS openapi-generator, ApiMatic, Speakeasy, Fern, Kiota, etc.) may interpret JSON Schema in its own, differing way. It is useful to try them out to see if their interpretation and implementation meet your needs and expectations.
JSON Schema provides other keywords for schema composition (the oneOf, anyOf, and not keywords, as well as conditionals with an if/then/else construct), but these are more advanced topics and I’ve already asked too much of you to read this far. Stay tuned, however: there is more to come!
·apidesignmatters.substack.com·
Learn - OpenAPI Spec
OpenAPI helps speed up API development. Define, mock, and test REST APIs using a single source of truth: the specification. Ideal for dev and QA teams adopting contract-first workflows.
Deeply nested schemas can become unwieldy and hard to maintain. For instance, a User object containing an Address object, which in turn contains a Location object, can quickly become complex. Why it matters: Simplifying schemas enhances readability and maintainability, making it easier for both developers and consumers to understand and work with the API.
Defining schemas, parameters, and responses inline repeatedly instead of using the components section leads to redundancy and potential inconsistencies. Why it matters: Leveraging components promotes reusability and consistency across the API specification.
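A sketch of the refactoring, using the User/Address/Location example from above (the individual properties are hypothetical): each nested object becomes a named, reusable component.

```yaml
components:
  schemas:
    User:
      type: object
      properties:
        name:
          type: string
        address:
          $ref: '#/components/schemas/Address'
    Address:
      type: object
      properties:
        street:
          type: string
        location:
          $ref: '#/components/schemas/Location'
    Location:
      type: object
      properties:
        lat:
          type: number
        lon:
          type: number
```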
Logically group your APIs into smaller, domain-specific specs — like auth.yaml, payment.yaml, orders.yaml. Use tags in OpenAPI to group related endpoints (like Order, Customer, Admin) even within a single file if needed.
```
/openapi
├── auth.yaml
├── customer.yaml
├── orders.yaml
└── components/
    └── common-schemas.yaml
```
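Within a single file, tag-based grouping might look like this sketch (the tag and path names are illustrative):

```yaml
tags:
  - name: Order
    description: Order placement and tracking
  - name: Customer
    description: Customer accounts and profiles
paths:
  /orders:
    get:
      tags:
        - Order
      summary: List orders
      responses:
        '200':
          description: A list of orders.
```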
·beeceptor.com·
Best API Documentation Tools | Beeceptor
A detailed review, with tabular comparison, of modern API documentation tools for creating developer-friendly API documentation, helping developers integrate APIs seamlessly and efficiently.
Start with Real-World Examples: Show complete request/response cycles, including auth headers and errors. For example, show what a GET /users call returns in JSON.
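A sketch of such a request/response cycle, with hypothetical endpoint data:

```http
GET /users HTTP/1.1
Host: api.example.com
Authorization: Bearer <access-token>

HTTP/1.1 200 OK
Content-Type: application/json

[
  {
    "id": "u_123",
    "name": "Ada Lovelace",
    "email": "ada@example.com"
  }
]
```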
Modern Tools
- Redocly: Offers Redoc for API docs plus additional tools like Revel for flexible branding without rigid templates, Reef for API monitoring, and an API registry to manage multiple OpenAPI definitions.
- Theneo: Uses AI to generate API references automatically, cutting down manual work. Started with docs but now expands its AI tools to streamline the whole API development process.
- Stoplight: Helps with API design and governance through tools like Stoplight Studio (visual OpenAPI editor), Prism (open-source HTTP mock server), and Spectral (linting tool to enforce API standards). Works best for teams taking a design-first approach.
- Postman: Goes beyond basic docs by providing workspaces, automated testing, monitoring, and governance. Connects deeply with GitHub, GitLab, and CI/CD pipelines as a complete API lifecycle tool.
- SwaggerHub: Lets teams collaborate on APIs using OpenAPI or AsyncAPI specs while managing them throughout their lifecycle. Built by the same team behind Swagger.
- Zuplo: Runs as a lightweight, fully-managed API platform built for developers, featuring GitOps, quick deployment, and unlimited preview environments. Works for both hobbyists and engineering leaders implementing auth systems.
- Gravitee.io: Works as an open-source API management platform with a full ecosystem including API Gateway, Management, Access Management, and observability tools.
- APIdog: Simplifies creating, managing, and testing APIs for both developers and testers. Focuses on building reliable, secure, and fast APIs.
- ReadMe: Helps companies create, manage, and publish API docs through a user-friendly interface that makes documentation accessible to users.
- dapperdox.io: Generates and serves open-source API docs for OpenAPI Swagger specs. Combines specs with docs, guides, and diagrams using GitHub Flavored Markdown.
- Docusaurus: Built by Meta as an open-source documentation platform. Lets developers write in Markdown or MDX and, being React-based, allows extensive customization.
- Scalar: Turns OpenAPI specs into interactive docs with an integrated API playground for testing endpoints directly in the documentation.
·beeceptor.com·
API Errors | Beeceptor
This guide helps you build a great API experience for consumers. Discover top errors and their fixes, from authorization to data handling, and enhance the developer experience!
Example of an Actionable API Error Message

```json
{
  "status_code": 400,
  "error": "Bad Request",
  "message": "The 'email' field is missing or invalid.",
  "suggestion": "Please provide a valid email address in the format 'user@example.com'.",
  "error_code": "VALIDATION_FAILED_001",
  "trace_id": "xyz1234abcd"
}
```

This message clearly identifies:
- The nature of the error (400 Bad Request)
- The specific problem (missing or invalid email field)
- A suggested fix (provide a valid email in the correct format)
- An internal error code for reference
- A trace ID for further investigation
·beeceptor.com·
OpenAPI Spec Generator - AI Prompt
Generates OpenAPI 3.1.0 specifications from diverse inputs. Free Programming & Code prompt for ChatGPT, Gemini, and Claude.
·docsbot.ai·
MCP.Link | Connect APIs to AI Assistants
Transform OpenAPI specifications into Model Context Protocol (MCP) endpoints for seamless AI integration.
·mcp-link.vercel.app·
Solutions - ScraperAPI
Get access to these core solutions with ScraperAPI, and take your web scraping efforts to the next level. Read more about each solution.
·scraperapi.com·
What is a Real Estate API? A Guide to Real Estate Data Integration
A real estate API (Application Programming Interface) is a digital tool that enables developers to access and integrate comprehensive real estate data into applications, websites, or services. These APIs provide real-time, on-demand information on properties and parcels.
·realestateapi.com·
RealEstateAPI | Public APIs | Postman API Network
Explore public APIs from RealEstateAPI, exclusively on the Postman API Network. Find everything you need to quickly get started with RealEstateAPI APIs.
·postman.com·
RealEstateAPI Developer Documentation
THE Property Data Solution. Our revolutionary tech allows us to get you property and owner data (and lots of it!) faster and cheaper than you've ever been able to before. Slow or buggy applications due to unreliable third party data APIs are a problem of the past.
·developer.realestateapi.com·
AI Model & API Providers Analysis | Artificial Analysis
Comparison and analysis of AI models and API hosting providers. Independent benchmarks across key performance metrics including quality, price, output speed & latency.
·artificialanalysis.ai·
Sparrow API Platform
Sparrow is your one-stop API testing solution. Supercharge your API workflow with Sparrow—the ultimate ally for agile teams and individual devs. Test, organize, and share APIs with finesse, revolutionizing your API game.
·sparrowapp.dev·
An introduction to function calling and tool use
In this blog post, we’ll explore how AI models are learning to do instead of just say. We will explain how function calling works, its real-world applications, and how you can implement it using tools like Ollama and Llama 3.2. Whether you’re a developer looking to build AI-powered applications or simply curious about how AI is transforming the way we interact with APIs, this guide will walk you through everything you need to know.
·apideck.com·
Godspeed Systems
The benefits of schema-driven development and a single source of truth for microservice, API, or event systems, with respect to productivity, maintainability, and agility.
This article focuses on the Schema Driven Development (SDD) and Single Source of Truth (SSOT) paradigms as two first principles every team must follow. It is an essential read for CTOs, tech leaders, and every aspiring 10X engineer out there. While I will mainly touch on SDD, I will also talk briefly about the 8 practices I believe are essential, and why we need them. Later in the blog you will see practical examples of SDD and SSOT, with screenshots and code snippets as applicable.
What is Schema Driven Development? SDD is about using a single schema definition as the single source of truth, and letting that determine or generate everything else that depends on the schema: for example, generating CRUD APIs for multiple kinds of event sources and protocols, doing input/output validations in producer and consumer, generating API documentation and Postman collections, starting mock servers for parallel development, and generating basic test cases. And as well, the sigh of relief that a change in one place will automatically be reflected everywhere else (Single Source of Truth).
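A minimal sketch of the idea, assuming a hypothetical user entity (the file name and fields are illustrative; the article does not prescribe them):

```yaml
# user.schema.yaml: the single source of truth for the user entity.
# From this one definition, tooling can generate CRUD APIs, OpenAPI/Swagger
# docs, Postman collections, mock servers, API clients, and basic test cases.
$id: https://example.com/schemas/user
type: object
required:
  - email
properties:
  id:
    type: string
    readOnly: true
  email:
    type: string
    format: email
  name:
    type: string
    maxLength: 120
```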
SDD helps to speedily kickstart and smoothly manage parallel development across teams, without writing a single custom line of code by hand. It is useful not only for kickstarting the project, but also for seamlessly upgrading along with source schema updates. For example, if you have a database schema, you can generate the CRUD API, Swagger, Postman collection, test cases, GraphQL API, API clients, etc. from that source database schema. Can you imagine the effort and errors saved in this approach? Hint: I once worked in a team of three backend engineers who, for three months, wrote only CRUD APIs, validations, and documentation, and didn't get time to write test cases. We used to share Postman collections over email.
What are the signs that your team doesn't use SDD? Such teams don't have an "official" source schema. They manually create and manage dependent schemas, APIs, test cases, documentation, API clients, etc. as independent activities (while they should be dependent on the source schema). For example, they handcraft Postman collections and share them over email. They handcraft the CRUD APIs for their GraphQL, REST, and gRPC services.
In this approach you will have multiple sources of truth (your DB schema; the user.schema.js file maintained separately; the Express routes and middlewares maintained separately; the Swagger and Postman collections maintained separately; the test cases maintained separately; and the API clients created separately). So much redundant effort and increased chance of mistakes! Add to that: coupling of schema with code and with event source setup (Express, GraphQL, etc.); non-reusability of the effort already done; and a lack of standardisation and maintainability, since every developer may implement this differently based on their style or preference of coding. This means more chaos, inefficiencies, and mistakes, and also difficulty switching between developers.
You will be:
- Writing repetitive validation code in your event source controllers, middleware, and clients
- Creating boilerplate for authentication and authorisation
- Manually creating Swagger specs and Postman collections (and maintaining often-varying versions across developers and teams, shared over email)
- Manually creating CRUD APIs (for database access)
- Manually writing integration test cases
- Manually creating API clients
Whether we listen on (sync or async) events, query a database, call an API, or return data from our sync event calls (HTTP, GraphQL, gRPC, etc.), in all such cases you will be witnessing:
- Redundant effort in maintaining SSOT derivatives and shipping upgrades
- Gaps between API, documentation, test case, and client versions
- Increased work, which increases the probability of errors by 10X
- Increased work, which means more areas to look into when errors happen (like finding a needle in a haystack). Imagine wrong data flowing from one microservice to another and breaking things across a distributed system! You would need to look across all of them to identify the source of the error.
When not following SSOT, there is no real source of truth. This means that whenever a particular API gains a new field or a changed schema, we need to make a manual change in five places: the service, the client(s), the Swagger spec, the Postman collection, and the integration test cases. What if the developer forgets to update the shared Postman collection? Or to write validation for the new field in the APIs? Do you now see how versions and shared API collections can often get out of sync without a single source of truth? Can you imagine the risk, chaos, bugs, and inefficiencies this can bring? Before we resume studying SDD and SSOT, let's take a quick detour to first understand some basic best practices which I believe are critically important for tech orgs, and why they are important.
The 8 best practices
In upcoming articles we will touch upon these 8 best practices:
1. Schema Driven Development & Single Source of Truth (topic of this post)
2. Configure over code
3. Security & compliance
4. Decoupled (modular) architecture
5. Shift-left approach
6. Essential coding practices
7. Efficient SDLC: issue management, documentation, test automation, code reviews, productivity measurement, source control and version management
8. Observability for fast resolution
Why should you care about ensuring best practices? As a tech leader, should your main focus be limited to hustling together an MVP and taking it to market? The MVP is just a small first step of a long journey. This journey includes multiple iterations for finding PMF, and then growth, optimisation, and sustainability. Along the way you face dynamic and unpredictable situations like changing teams, customer needs, new compliance, and new competition. Given this, should you lay your foundation keeping the future in mind as well? For example: maintainability, agility, quality, democratisation, and avoiding risks.
·godspeed.systems·
Creating OpenAPI from HTTP Traffic
Around this time of year we're thinking about things we're going to do differently, new practices we've been putting off for too long, and mistakes we want to avoid continuing into another year. For many of us in the API world, that is going to be switching to API Design-first, using standards like OpenAPI to plan and prototype the API long before any code is written. More organizations are switching to API Design-first with OpenAPI, thanks to huge efforts from tooling vendors - from the bigge
·apisyouwonthate.com·
The OpenAPI Specification Explained
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
The OpenAPI Specification Explained
The OpenAPI Specification is the ultimate source of knowledge regarding this API description format. However, its length is daunting to newcomers and makes it hard for experienced users to find specific bits of information. This chapter provides a soft landing for readers not yet familiar with OpenAPI and is organized by topic, simplifying browsing. The following pages introduce the syntax and structure of an OpenAPI Description (OAD), its main building blocks and a minimal API description. Afterwards, the different blocks are detailed, starting from the most common and progressing towards advanced ones.
- Structure of an OpenAPI Description: JSON, YAML, openapi and info
- API Endpoints: paths and responses
- Content of Message Bodies: content and schema
- Parameters and Payload of an Operation: parameters and requestBody
- Reusing Descriptions: components and $ref
- Providing Documentation and Examples: description and example/examples
- API Servers: servers
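For orientation, a minimal OpenAPI Description using those building blocks might look like this (an illustrative sketch, not taken from the guide):

```yaml
openapi: 3.1.0
info:
  title: Minimal Example API
  version: 1.0.0
paths:
  /ping:
    get:
      summary: Health check
      responses:
        '200':
          description: The service is alive.
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
```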
·learn.openapis.org·
Overlays
For API designers and writers wishing to formalize their API in an OpenAPI Description document.
Introduction to the OpenAPI Overlay Specification
The Overlay Specification defines a document format for information that transforms an existing OpenAPI description yet remains separate from the OpenAPI description’s source document(s). The Overlay Specification defines a mechanism for providing consistent, deterministic updates to a given OpenAPI description, as an aid to automation throughout the API lifecycle. An Overlay can be applied to an OpenAPI description, resulting in an updated OpenAPI description. OpenAPI + Overlays = (better) OpenAPI. One Overlay might be specific to one OpenAPI description, or general enough to be used with multiple OpenAPI descriptions. Equally, one OpenAPI description pipeline might apply different Overlays during the workflow.
Use cases for Overlays
Overlays support a range of scenarios, including:
- Translating documentation into another language
- Providing configuration information for different deployment environments
- Allowing separation of concerns for metadata such as gateway configuration or SLA information
- Supporting a traits-like capability for applying a set of configuration data, such as multiple parameters or multiple headers, for targeted objects
- Providing default responses or parameters where they were not explicitly provided
- Applying configuration data globally or based on filter conditions
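A small sketch of an Overlay document covering two of these cases (the JSONPath targets and values are hypothetical):

```yaml
overlay: 1.0.0
info:
  title: Environment-specific tweaks
  version: 1.0.0
actions:
  # Merge an updated description into the info object
  - target: $.info
    update:
      description: Documentation translated for the staging audience.
  # Append a deployment-specific server entry to the servers array
  - target: $.servers
    update:
      url: https://staging.example.com/v1
      description: Staging environment
  # Remove an endpoint not exposed in this environment
  - target: $.paths['/internal'].get
    remove: true
```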
Resources for working with Overlays The GitHub repository for Overlays is the main hub of activity on the Overlays project. Check the issues and pull requests for what is currently in progress, and the discussions for details of future ideas and our regular meetings. The project maintains a list of tools for working with Overlays.
·learn.openapis.org·