No Clocks

No Clocks

REST API in R with plumber
REST API in R with plumber
API and R: Nowadays, it's pretty much expected that software comes with an HTTP API interface. Every programming language out there offers a way to expose APIs or make GET/POST/PUT requests, including R. In this post, I'll show you how to create an API using the plumber package. Plus, I'll give you tips on how to make it more production-ready - I'll tackle scalability, statelessness, caching, and load balancing. You'll even see how to consume your API with other tools like Python, curl, and R's own httr package.
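As a quick illustration of consuming such an API (an illustration only - the post itself uses curl, Python and httr; the endpoint and route here are hypothetical, using plumber's default port 8000):

// Calling a hypothetical plumber endpoint from TypeScript
async function main() {
  const res = await fetch("http://localhost:8000/echo?msg=hello");
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const data: { msg: string } = await res.json();
  console.log(data.msg);
}
main();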
·zstat.pl·
REST API in R with plumber
SPA Mode | Remix
SPA Mode | Remix
From the beginning, Remix's opinion has always been that you own your server architecture. This is why Remix is built on top of the Web Fetch API and can run on any modern runtime via built-in or community-provided adapters. While we believe that having a server provides the best UX/Performance/SEO/etc. for most apps, it is also undeniable that there exist plenty of valid use cases for a Single Page Application in the real world:
SPA Mode is basically what you'd get if you had your own React Router + Vite setup using createBrowserRouter/RouterProvider, but along with some extra Remix goodies: file-based routing (or config-based via routes()), automatic route-based code-splitting via route.lazy, <Link prefetch> support to eagerly prefetch route modules, and <head> management via the Remix <Meta>/<Links> APIs. SPA Mode tells Remix that you do not plan on running a Remix server at runtime, that you wish to generate a static index.html file at build time, and that you will only use Client Data APIs for data loading and mutations. The index.html is generated from the HydrateFallback component in your root.tsx route. The initial "render" to generate the index.html will not include any routes deeper than root. This ensures that the index.html file can be served/hydrated for paths beyond / (i.e., /about) if you configure your CDN/server to do so.
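A minimal sketch of what a root.tsx can look like in SPA Mode (assuming a Vite-based Remix app with ssr: false in vite.config.ts; the markup is illustrative, not the official template):

// root.tsx - Remix renders HydrateFallback into the static index.html at build time
import type { ReactNode } from "react";
import { Links, Meta, Outlet, Scripts, ScrollRestoration } from "@remix-run/react";

export function Layout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <head>
        <Meta />
        <Links />
      </head>
      <body>
        {children}
        <ScrollRestoration />
        <Scripts />
      </body>
    </html>
  );
}

// Shown until the client bundle loads and hydrates; only root-level UI, no deeper routes
export function HydrateFallback() {
  return <p>Loading...</p>;
}

export default function Root() {
  return <Outlet />;
}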
·remix.run·
SPA Mode | Remix
middleware.md
middleware.md
middleware.md · GitHub
·gist.github.com·
middleware.md
RESTful API Design Best Practices Guide 2024
RESTful API Design Best Practices Guide 2024
Guide to RESTful API design best practices in 2024 covering resource-based architecture, stateless communication, client-server separation, URI design, HTTP method usage, security, performance optimization, and more.
·daily.dev·
RESTful API Design Best Practices Guide 2024
Godspeed Systems
Godspeed Systems
The benefits of schema-driven development and a single source of truth for microservices, API or event systems with respect to productivity, maintainability & agility
This article focuses on the Schema Driven Development (SDD) and Single Source of Truth (SSOT) paradigms as two first principles every team must follow. It is an essential read for CTOs, tech leaders and every aspiring 10X engineer out there. While I will mainly touch on SDD, I will also talk briefly about the 8 practices I believe are essential, and why we need them. Later in the blog you will see practical examples of SDD and SSOT with screenshots and code snippets as applicable.
What is Schema Driven Development? SDD is about using a single schema definition as the single source of truth, and letting that determine or generate everything else that depends on the schema. For example: generating CRUD APIs for multiple kinds of event sources and protocols, doing input/output validation in producers and consumers, generating API documentation & Postman collections, starting mock servers for parallel development, and generating basic test cases. And, as a welcome relief, a change in one place is reflected everywhere else automatically (Single Source of Truth).
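As a rough sketch of the idea in TypeScript (names and schema are made up; AJV is used here as one common JSON Schema validator), a single schema object drives both the static type check and the runtime validation, and the same definition could feed generators for Swagger, Postman, clients and tests:

// user.schema.ts - a minimal single-source-of-truth sketch (hypothetical names)
import Ajv, { JSONSchemaType } from "ajv";

interface User {
  name: string;
  email: string;
  age?: number;
}

// One schema definition: reused for request validation here, and exportable to codegen
// so a change in one place propagates everywhere
export const userSchema: JSONSchemaType<User> = {
  type: "object",
  required: ["name", "email"],
  properties: {
    name: { type: "string" },
    email: { type: "string" },
    age: { type: "integer", nullable: true, minimum: 0 },
  },
  additionalProperties: false,
};

const ajv = new Ajv();
export const validateUser = ajv.compile(userSchema);

if (!validateUser({ name: "Ada" })) {
  console.log(validateUser.errors); // reports the missing required "email"
}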
SDD helps to speedily kickstart and smoothly manage parallel development across teams, without writing a single custom line of code by hand. It is not only useful for kickstarting the project, but also for seamlessly upgrading along with source schema updates. For example, if you have a database schema, you can generate the CRUD API, Swagger, Postman collection, test cases, GraphQL API, API clients etc. from that source database schema. Can you imagine the effort and errors saved in this approach? Hint: I once worked in a team of three backend engineers who, for three months, only wrote CRUD APIs, validations and documentation, and didn't get time to write test cases. We used to share the Postman collection over email.
What are the signs that your team doesn't use SDD? Such teams don't have an "official" source schema. They manually create and manage dependent schemas, APIs, test cases, documentation, API clients etc. as independent activities (while they should be derived from the source schema). For example, they handcraft Postman collections and share them over email. They handcraft the CRUD APIs for their GraphQL, REST and gRPC services.
In this approach you will have multiple sources of truth (your DB schema, the user.schema.js file maintained separately, the Express routes & middlewares maintained separately, the Swagger and Postman collections maintained separately, the test cases maintained separately, and the API clients created separately). So much redundant effort and increased chance of mistakes! Coupling of schema with code and with event-source setup (Express, GraphQL etc.). Non-reusability of the effort already done. Lack of standardisation and maintainability - every developer may implement this differently based on their coding style or preference. This means more chaos, inefficiencies and mistakes, and also difficulty switching between developers.
You will be writing repetitive validation code in your event source controllers, middleware and clients; creating boilerplate for authentication & authorisation; manually creating Swagger specs & Postman collections (and maintaining often-diverging versions across developers and teams, shared over email); manually creating CRUD APIs (for database access); manually writing integration test cases; and manually creating API clients.
Whether we listen on (sync or async) events, query a database, call an API, or return data from our sync event calls (HTTP, GraphQL, gRPC etc.) - in all such cases you will be witnessing: redundant effort in maintaining SSOT derivatives and shipping upgrades; gaps between API, documentation, test case and client versions; increased work, which increases the probability of errors by 10X; and increased areas to look into when errors happen (like finding a needle in a haystack). Imagine wrong data flowing from one microservice to another and breaking things across a distributed system! You would need to look across all of them to identify the source of the error.
When not following SSOT, there is no real source of truth. This means that whenever a particular API has a new field or a changed schema, we need to make manual changes in five places - service, client(s), Swagger, Postman collection and integration test cases. What if the developer forgets to update the shared Postman collection? Or to write validation for the new field in the APIs? Do you now see how versions and shared API collections can often get out of sync without a single source of truth? Can you imagine the risk, chaos, bugs and inefficiencies this can bring? Before we resume studying SDD and SSOT, let's take a quick detour to first understand some basic best practices which I believe are critically important for tech orgs, and why they are important.
The 8 best practices. In upcoming articles we will touch upon these 8 best practices: Schema Driven Development & Single Source of Truth (the topic of this post); Configure Over Code; Security & compliance; Decoupled (Modular) Architecture; Shift Left Approach; Essential coding practices; Efficient SDLC (issue management, documentation, test automation, code reviews, productivity measurement, source control and version management); and Observability for fast resolution.
Why should you care about ensuring best practices? As a tech leader, should your main focus be limited to hustling together an MVP and taking it to market? But an MVP is just a small first step of a long journey. This journey includes multiple iterations for finding PMF, and then growth, optimisation and sustainability. Along the way you face dynamic and unpredictable situations like changing teams, customer needs, new compliance, new competition etc. Given this, shouldn't you lay your foundation keeping the future in mind as well - for example maintainability, agility, quality, democratisation and avoiding risks?
·godspeed.systems·
Godspeed Systems
JSON Schema validation | Godspeed Docs
JSON Schema validation | Godspeed Docs
The Framework provides request and response schema validation
Request schema validation: We have the ability to define inputs and their types in our request schema, such as path parameters, query parameters, and the request body. This allows the framework to validate whether the API has received the specified inputs in the expected types. Whenever an API is triggered, AJV (Another JSON Schema Validator) verifies the request schema against the provided inputs. If the defined schema matches the inputs, it allows the workflow to execute. Otherwise, it throws an error with a status code of 400 and a descriptive message indicating where the schema validation failed.
Response schema validation: Just like request schema validation, there is also response schema validation in place. In this process, the framework checks the response type, validates the properties of the response, and ensures they align with the specified types. Response schema validation involves storing the response schema, letting the workflow execute, and then checking the response body and its properties. It covers two cases: failure in workflow execution, and successful workflow execution that fails response schema validation.
If the response schema validation fails, the API returns a 500 Internal Server Error.
In the case of failed request schema validation, the APIs respond with a status of 400 and a message indicating a "bad request." Conversely, if the response schema validation encounters an issue, the APIs return a status of 500 along with an "Internal Server Error" message.
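A small Express + AJV sketch of the behavior described above - validate the request before the handler and the response after it, returning 400 and 500 respectively (an illustration only, not Godspeed's internal implementation; the route and schema are taken loosely from the helloworld event below):

import express from "express";
import Ajv from "ajv";

const ajv = new Ajv();
// The same shape is used for the request body and the response body in this sketch
const validate = ajv.compile({
  type: "object",
  required: ["name"],
  properties: { name: { type: "string" } },
});

const app = express();
app.use(express.json());

app.post("/helloworld", (req, res) => {
  if (!validate(req.body)) {
    // Request schema validation failed -> 400 with a descriptive message
    return res.status(400).json({ message: "bad request", errors: validate.errors });
  }
  const body = { name: `Hello ${req.body.name}` };
  if (!validate(body)) {
    // Response schema validation failed -> 500
    return res.status(500).json({ message: "Internal Server Error" });
  }
  res.status(200).json(body);
});

app.listen(3000);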
Event with response and request schema validation:

http.post./helloworld:
  fn: helloworld
  params:
    - name: path_params
      in: path
      required: true
      schema:
        type: string
    - name: query_params
      in: query
      required: true
      schema:
        type: string
  body:
    content:
      application/json:
        schema:
          type: object
          required: [name]
          properties:
            name:
              type: string
  responses:
    200:
      content:
        application/json:
          schema:
            type: object
            required: [name]
            properties:
              name:
                type: string
·godspeed.systems·
JSON Schema validation | Godspeed Docs
Design Principles | Godspeed Docs
Design Principles | Godspeed Docs
Three fundamental abstractions
Schema driven data validation: We follow the Swagger spec as a standard to validate the schema of events, whether incoming or outgoing (HTTP), without the developer having to write any code. In the case of database API calls, the datastore plugin validates the arguments based on the DB model specified in Prisma format. The plugins for HTTP APIs or datastores offer validation for third-party API requests and responses, datastore queries, and incoming events based on the Swagger spec or DB schema. For more intricate validation scenarios, such as conditional validation based on attributes like subject, object, environment, or payload, developers can incorporate these rules into the application logic as part of middleware or workflows.
Unified datastore model and API: The unified model configuration and CRUD API, which cover popular SQL and NoSQL stores including Elasticgraph (a unique ORM over Elasticsearch), offer standardized interfaces to various types of datastores. Each integration adapts to the nature of the data store. The Prisma and Elasticgraph plugins provided by Godspeed expose the native functions of the underlying client, giving the developer the freedom to use the universal syntax or native queries.
Authentication: Authentication identifies who the user is and generates their access token or JWT for authorized access to the resources of the application. The framework gives developers full freedom to set up any kind of authentication. For example, they can set up simple auth using the microservice's internal datastore, or they can invoke an IAM service like ORY Kratos, Okta, or an in-house service. They can also add OAuth2 authentication using different providers like Google, Microsoft, Apple, GitHub etc. using pre-built plugins, or import and customize an existing HTTP plugin like Express by adding PassportJS middleware.
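For the Express + PassportJS route mentioned above, a minimal JWT authentication sketch might look like this (an illustration of the approach, not a Godspeed plugin; the secret and route are placeholders):

import express from "express";
import passport from "passport";
import { Strategy as JwtStrategy, ExtractJwt } from "passport-jwt";

passport.use(
  new JwtStrategy(
    {
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      secretOrKey: process.env.JWT_SECRET ?? "dev-secret", // placeholder secret
    },
    // Attach the identified user to the request once the token is verified
    (payload, done) => done(null, { id: payload.sub })
  )
);

const app = express();
app.use(passport.initialize());

// Any route behind this middleware requires a valid Bearer token
app.get("/me", passport.authenticate("jwt", { session: false }), (req, res) => {
  res.json(req.user);
});

app.listen(3000);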
Authorization: Authorization is key to security, for multi-tenant and a variety of other use cases. The framework allows a neat, clean and low-code syntax for fine-grained authorization at the event level or at a workflow task's level, when querying a database or another API. Developers define authorization rules for each event or workflow task using straightforward configurations for JWT validation or RBAC/ABAC. For more complex use cases, for example where they query a policy engine and dynamically compute the permissions, they can write workflows or native functions that access the datasources, compute the rules on the fly, and patch the outcome of that function into a task's authz parameter. These rules encompass not only access to API endpoints but also fine-grained data access within datastores, at the table, row and column level. The framework allows seamless integration with third-party authorization services or ACL databases via the datasource abstraction.
Autogenerated Swagger spec: Following the principles of Schema Driven Development, the event spec of the microservice can be used to auto-generate the Swagger spec for the HTTP APIs exposed by the service. The framework provides autogenerated Swagger documentation via the CLI.
Autogenerated CRUD API: The framework provides autogenerated CRUD APIs from a database model written in Prisma format. Generated APIs can be extended by developers as per their needs. We are planning to auto-generate GraphQL and gRPC APIs as well, and may release a developer bounty for the same soon.
Environment variables and configurations: The framework promotes setting up environment variables in a pre-defined YAML file, though the developer can also provide them by other means, via a .env file or by setting them manually. Further configuration is written in the /config folder. These variables are accessible in other configuration files; in datasource, event source and event definitions; and in workflows.
Log redaction: The framework allows the developer to specify keys that may carry sensitive information and should never be published in logs by mistake. There is a centralized check for such keys before a log line is printed.
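A centralized redaction check of this kind can be as simple as masking a configured set of keys before the log line is emitted - a generic sketch, not Godspeed's actual code:

// Keys assumed to be sensitive in this example
const SENSITIVE_KEYS = new Set(["password", "authorization", "ssn", "cardNumber"]);

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.has(k) ? [k, "***REDACTED***"] : [k, redact(v)]
      )
    );
  }
  return value;
}

console.log(JSON.stringify(redact({ user: "ada", password: "hunter2" })));
// -> {"user":"ada","password":"***REDACTED***"}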
Telemetry autoinstrumentation using OTEL: Godspeed allows a developer to add auto-instrumentation which publishes logs, traces and APM information in the OTEL standard format, supported by all major observability backends. The APM export captures not just RAM and CPU information per node/pod/service, but also latency information for incoming API calls, with broken-down spans giving the breakup of latency across calls to datastores or external APIs. This helps to find the exact bottlenecks. Further, logs and traces/spans are correlated to find exactly where an error happened in a request spanning multiple microservices, each calling multiple datasources and doing internal computation. Developers can also add custom logging, span creation and BPM metrics at the task level - for example, new user registration, failed login attempts etc.
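Wiring up OTEL auto-instrumentation in a Node service typically takes only a few lines - a sketch assuming the standard OpenTelemetry packages are installed and an OTLP collector listens on the default local endpoint:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  // Export spans to a local collector; swap the URL for your observability backend
  traceExporter: new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" }),
  // Auto-instruments HTTP servers/clients, Express, popular DB drivers, etc.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();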
·godspeed.systems·
Design Principles | Godspeed Docs
Schema driven development | Steven Grosmark
Schema driven development | Steven Grosmark
I see two meanings of “schema-driven development” - literal and general. The literal meaning is the capitalized Schema-Driven Development (SDD) process, which applies to the development of back end services / REST APIs. The general meaning is the idea of using “schemas” as a first step of a development process.
Schema-Driven Development for APIs
In this sense, advocates state that one must start with a schema - e.g., OpenAPI (or similar) spec for REST APIs. The argument is that the schema is the contract under which all concerned parties operate, and how would any one individual know what to expect, or how to behave in various circumstances without a contract?
I love the use of the word contract here, because it evokes great analogies of other contexts where the contract (schema) is also the starting point. Most apt is the use of architectural drawings when building a home - the number of tasks and people that pivot independently around those drawings is massive, not to mention the ability to estimate time and cost with a fair degree of accuracy up front.
In the context of developing APIs, the schemas serve as the pivot point around which all other work can be done independently and concurrently. With a clear, precise, unambiguous schema, a development timeline can look like this:
Faux Schema-Driven Development. Schema-Driven Development is not: JSON (examples are not schemas), or post-hoc documentation (schemas are the driver, and should never be driven by code - either lazily, by manually writing a schema after the service has been implemented, or actively, by using a generator to generate schemas from code).
Benefits of Schema-Driven Development: Enables teams to work independently. Encourages thinking ahead of time rather than by the seat of your pants - solving problems ahead of time is worth 10x more than solving them in production. Reduces micro-decisions (the kind that no one bubbles up, and may not even be aware of, as they make them, but that oftentimes surface later as a subtle bug). Fosters communication and collaboration - cross-team, and within teams.
Guidelines for writing good schemas: Start discussing data types and structures in the ideation / planning phase of a project. The first implementation task of a project should be to write the schema. A schema should be an unambiguous declaration of all data types and structures. Mark fields that will always have a value as required. Use the most specific data type available (see Data Types - note that Swagger became OpenAPI). When using a string, add more constraints whenever possible: use a format, like date, date-time, uuid, etc.; use enums when appropriate; use a pattern if string values follow a RegEx. For number and integer types, include min/max values. Lean into adding descriptions. At the very least, document date format & time zone expectations. Include example values.
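A small schema fragment applying these guidelines might look like this (field names and constraints are invented for illustration):

// OpenAPI/JSON Schema-style object, shown here as a TypeScript constant
const orderSchema = {
  type: "object",
  required: ["id", "status", "createdAt"],
  properties: {
    id: { type: "string", format: "uuid", description: "Order identifier" },
    status: { type: "string", enum: ["pending", "shipped", "delivered"] },
    createdAt: { type: "string", format: "date-time", description: "Creation time, UTC" },
    couponCode: { type: "string", pattern: "^[A-Z0-9]{6}$", example: "SAVE10" },
    quantity: { type: "integer", minimum: 1, maximum: 100, example: 2 },
  },
} as const;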
Schema-driven development as an idea
The second meaning of schema-driven development to me is the idea of starting a project with the process of designing your data structures and semantics. This involves taking a schema-driven approach to things that are not REST APIs.
I have seen this work really well in practice. On my current team, I have adopted a process of having a 30-60 minute discussion to think about and nail down data types and/or methods whenever more than one platform is involved. Half the time we can even do this over Slack, asynchronously. This works great, and the team really enjoys not having to revisit things that weren't thought through ahead of time.
Summary I place a very high value on planning - the kind of planning that is either literally Schema-Driven Development or generally embraces the idea of it. A small amount of time spent doing this kind of planning can: Save a lot of time by not having to refactor things later, because you’ve thought out the edge cases up front; Reduce technical debt, because your up-front considerations mean you don’t have to live with the consequences of poor design choices; Reduce code complexity, because you’ve made things as unambiguous as possible ahead of time; Make your team faster, because you’re not spending extra time to refactor things; Make your team happier and more collaborative, because everyone’s opinions and perspectives are important and valuable when planning.
·g-mark.com·
Schema driven development | Steven Grosmark
Schema-driven development in 2021 - 99designs
Schema-driven development in 2021 - 99designs
Schema-driven development is an important concept to know in 2021. What exactly is schema-driven development? What are the benefits of schema-driven development? We will explore the answers to these questions in this article.
·99designs.com·
Schema-driven development in 2021 - 99designs
Airparser
Airparser
Revolutionize data extraction with the GPT parser. Extract structured data from emails, PDFs, and any documents. Export the parsed data in real time to any app.
·help.airparser.com·
Airparser
apidays
apidays
·apilandscape.apiscene.io·
apidays
APIscene | Inspiring the community, one API story at a time
APIscene | Inspiring the community, one API story at a time
Dive deep into the API Economy with the API Scene. Read about API platform practices, cloud APIs, API management, and how to participate in the platform...
·apiscene.io·
APIscene | Inspiring the community, one API story at a time
WSTG - Latest | OWASP Foundation
WSTG - Latest | OWASP Foundation
WSTG - Latest on the main website for The OWASP Foundation. OWASP is a nonprofit foundation that works to improve the security of software.
·owasp.org·
WSTG - Latest | OWASP Foundation
API Documentation Using Hacker Tools Mitmproxy2swagger
API Documentation Using Hacker Tools Mitmproxy2swagger
Discover mitmproxy2swagger: A quick solution to generate API documentation, bridging the gap between backend and frontend teams effortlessly in just 2 mins
API documentation is a collection of references, tutorials, documents, or videos that help developers use your API, governed by the OpenAPI Specification (OAS). An API (Application Programming Interface) is a data-sharing technique that helps applications communicate with each other. Not the best definition in the world, but I like to think of an API as a dynamic messenger: it can store your message, process it, and also deliver it to multiple people. It is also responsible for the security of your message until it reaches you.
There are a lot of tools on the market used to produce great documentation; Swagger, Postman, Doxygen, ApiDoc, and Document360, just to name a few. However, most developers remain oblivious to the tools developed for reconnaissance, which, once you interact with them, turn out to be useful to developers as well.
mitmproxy2swagger
mitmweb is a component of the mitmproxy project; it serves to intercept the requests that will be channeled to the listener port it opens on 8080.
Next, we'll need to configure the source of the requests, for which we'll use Postman.
Click on the gear icon at the top right corner of the Postman interface to access the settings.
In the settings pop-up, select Proxy and then toggle "Use custom proxy configuration". Here we'll add the proxy listener port so that Postman channels all requests through our custom mitmproxy proxy.
·muriithigakuru.hashnode.dev·
API Documentation Using Hacker Tools Mitmproxy2swagger
Reverse Engineer an API using MITMWEB and POSTMAN and create a Swagger file (crAPI)
Reverse Engineer an API using MITMWEB and POSTMAN and create a Swagger file (crAPI)
Many times when we are trying to pentest an API we might not get access to the Swagger file or the documentation of the API. Today we will…
Many times when we are trying to pentest an API we might not get access to the Swagger file or the documentation of the API. Today we will try to create the Swagger file using mitmweb and Postman.
Man in the Middle Proxy (mitmweb)
Run mitmweb from our command line in Kali.
As we can see, it starts listening on port 8080 for HTTP/HTTPS traffic, and we can make sure it's running by navigating to the address above, which is localhost on port 8081.
Then we will proxy our traffic through the Burp Suite proxy port 8080, because we already have mitmweb listening on that port (make sure Burp is closed).
Then we will stop the capture and use mitmproxy2swagger to analyse it.
·medium.com·
Reverse Engineer an API using MITMWEB and POSTMAN and create a Swagger file (crAPI)
Reverse engineering a Web API
Reverse engineering a Web API
Introduction: Most websites or web services have an API in the backend that delivers requested data to the frontend. This can be anything from the Google Search API to delivering a message on Discord. Some people in the gaming community scan a game's username database for certain available special names, like 3-letter names, to register them. I've been asked to write a tool to automate that. To do that I had to reverse engineer the R6DB API. I could then use that API to check for available usernames programmatically. This API has since shut down, likely due to abuse. The method I'm going to show also works on Electron apps such as Discord by bringing up the DevTools. For any other app, you can use something like Fiddler to intercept the web requests.
·vollragm.github.io·
Reverse engineering a Web API
Package-Wide Variables/Cache in R Packages | R-bloggers
Package-Wide Variables/Cache in R Packages | R-bloggers
It’s often beneficial to have a variable shared between all the functions in an R package. One obvious example would be the maintenance of a package-wide cache for all of your functions. I’ve encountered this situation multiple times and always forget at least one important step in the process, so I thought I’d document it [...]
·r-bloggers.com·
Package-Wide Variables/Cache in R Packages | R-bloggers