Found 14 bookmarks
dbdiagram Public API | dbdiagram Docs
# Introduction

***API access is currently in Beta and only available if you have a paid plan.*** Using these APIs, you can work with dbdiagram programmatically. For example:

- You can programmatically CRUD diagrams.
- You can generate an [embed link](https://docs.dbdiagram.io/embedding) for a specific diagram. This is especially useful if you need to attach the diagram to your documents, blogs, and websites.

# Authorization

- API tokens are managed at the [workspace](https://docs.dbdiagram.io/workspaces) level, granting access to all diagrams within the workspace.
- Workspace owners can generate new tokens via the "API Tokens" tab in the workspace window.
- API tokens should be held securely within the user's environment to avoid leaking the key.

# Errors

| HTTP Code | Description |
| --- | --- |
| 200 - OK | Everything worked as expected. |
| 400 - Bad Request | The request was unacceptable due to a missing request parameter or wrong request structure. |
| 401 - Unauthorized | No valid API key provided. |
| 403 - Forbidden | The API key owner does not have permission to perform the request. |
| 404 - Not Found | The requested resource does not exist or cannot be found. |
| 429 - Too Many Requests | Too many requests were sent. |
| 500 - Internal Error | Something went wrong on dbdiagram's side (rare). |

# Rate-Limiting

## Overview

To prevent DDoS attacks, every API request goes through a rate-limit layer that throttles requests once a user exceeds their quota. Limits are applied per user and per endpoint; quotas (per time frame) may differ by endpoint and are divided into levels:

| Level | Quota | Note |
| --- | --- | --- |
| Level 1 | 120 requests / minute | The minimum level; applies to every API request |
| Level 2 | 60 requests / minute | Requests that require significant resources |
| Level 3 | 20 requests / minute | Requests that heavily affect server resources |

## Return Header And Status Code

If a request is blocked because it exceeds the quota, the status code is set to **429: Too Many Requests**. Every API response's headers contain the following fields:

- **RateLimit-Limit**: *your_limit_quota_of_endpoint*
- **RateLimit-Remaining**: *remaining_requests_until_reset*
- **RateLimit-Reset**: *next_reset_time*
- **Retry-After**: *next_reset_time* (only present when the status code is 429)
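A minimal R sketch of calling the API with these limits in mind. The endpoint path and bearer-token scheme are assumptions (the docs above specify neither); only the RateLimit-* / Retry-After header names come from the docs:

```r
library(httr)

# Hypothetical endpoint and auth scheme: the docs above name neither.
# Only the RateLimit-* / Retry-After response headers are documented.
token <- Sys.getenv("DBDIAGRAM_API_TOKEN")
resp <- GET(
  "https://api.dbdiagram.io/v1/diagrams",  # assumed URL; check the API reference
  add_headers(Authorization = paste("Bearer", token))
)

if (status_code(resp) == 429) {
  # Retry-After is only present on 429 responses
  Sys.sleep(as.numeric(headers(resp)[["retry-after"]]))
} else {
  message("Requests left this window: ", headers(resp)[["ratelimit-remaining"]])
  body <- content(resp, as = "parsed")
}
```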
·docs.dbdiagram.io·
dbdiagram Public API | dbdiagram Docs
Making sense out of Semi-Structured data
Parsing JSON with the Extract Nested Data component within Matillion Data Productivity Cloud, connected to Snowflake, simplifies parsing for many semi-structured data patterns. JSON has become the more popular format for semi-structured data, largely because it is more consistent: everything is a key:value pair, and JSON handles repeating elements by containing them in an array as the value of a key:value pair. For this article, I am using the same example data set that was used in part one on XML, only this time the sample data is represented as JSON. I also walk you through how to convert XML to JSON to simplify parsing XML.

Extract Nested Data

We start by using the Extract Nested Data component, which simplifies parsing semi-structured data; in this example, we're using several of them to traverse the nested elements. First, the JSON file is loaded into a table called donut_json, which contains a single column defined as a variant, "data_value". Next, configure the Columns property of the Extract Nested Data component. I used "Autofill" and let the component identify the structure of the JSON, deselected all the columns, and chose to pass through the Item attributes and element values. I also passed through the Filling element, keeping it a variant for further processing downstream. Since the topping elements repeat at the first level, the component flattened toppings into separate rows automatically, so I was able to select the element value level for toppings. Another property to call out is the Outer Join property on the Configuration tab: because not every element exists for every item, I set Outer Join = "Yes". This retains the rows for all items, even though only two items have Fillings.

Flatten Variant

The Flatten Variant component is used to flatten arrays. Although the Extract Nested Data component can sometimes do this, Flatten Variant lets you explicitly break a column into more rows than the original Extract Nested Data output when you need finer granularity. The batter element in this example has two formats, so I have to treat the Batter array differently, using a Flatten Variant component to parse the array of batters into separate rows. The initial Extract Nested Data component created a new row for each item and each topping; from there, we want a new row for each item, topping, and batter. I tested whether the batter element is an array by using the IS_ARRAY() function in a Calculator component:

IS_ARRAY("items_item-element_batters_batter")

After that, flatten the array into separate rows per batter element before extracting the attributes:

- Set the Column Flatten property to read the batter array column.
- In the column mappings, use the flatten alias to map to an output variant column.

Finally, we bring all the rows back together, remove unwanted columns, and write to a new table:

- The Unite component unions all the rows back together.
- The Rename component lets us remove unwanted fields, like the arrays, and rename and reorder the remaining fields.
- The Rewrite component writes to a new table.

The resulting final pipeline is much simpler than the previous XML one.
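Matillion generates the Snowflake SQL for these components behind the scenes. As a rough stand-alone analog of the same extract-then-flatten pattern (my sketch in R with a made-up donut sample, not the article's pipeline):

```r
library(jsonlite)
library(tidyr)
library(tibble)

# Hypothetical donut sample, shaped like the article's data set
donuts <- fromJSON('{"items":{"item":[
  {"id":"0001","name":"Cake",
   "topping":[{"id":"5001","type":"None"},{"id":"5002","type":"Glazed"}],
   "batters":{"batter":[{"id":"1001","type":"Regular"},{"id":"1002","type":"Chocolate"}]}},
  {"id":"0002","name":"Raised",
   "topping":[{"id":"5005","type":"Sugar"}],
   "batters":{"batter":[{"id":"1001","type":"Regular"}]}}
]}}', simplifyVector = FALSE)

items <- tibble(item = donuts$items$item) |>
  unnest_wider(item) |>                      # one column per item attribute
  unnest_longer(topping) |>                  # like Extract Nested Data on toppings
  unnest_wider(topping, names_sep = "_") |>
  unnest_wider(batters) |>
  unnest_longer(batter) |>                   # like Flatten Variant on the batter array
  unnest_wider(batter, names_sep = "_")

print(items)  # one row per item x topping x batter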
Convert XML to JSON

Our example pipeline started with a file that was already in JSON format. However, if you have an XML file that needs to be converted, and you would like to do the conversion inside a pipeline, you'll use the code below.

Create an Orchestration Pipeline

First, I created a separate Orchestration pipeline that contains a SQL Script component to create a Snowflake UDF using the code below. This code calls a Snowflake Snowpark package called "xmltodict". Our example XML_to_JSON Python code follows.

Parse With the Calculator Component

Next, in my Transformation pipeline, I called the procedure in a Calculator component. The parse_json function formats the JSON so it's readable.
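The UDF code itself isn't captured in this excerpt. As a rough client-side analog in R (not the article's Snowflake/Snowpark code), xml2 plus jsonlite can make the same XML-to-JSON hop:

```r
library(xml2)
library(jsonlite)

# Hypothetical XML snippet standing in for the article's donut file
xml_doc <- read_xml('<items><item id="0001"><name>Cake</name></item></items>')

# as_list() turns the XML tree into nested R lists; toJSON() then
# serializes those lists as JSON. Attribute handling differs from
# Python's xmltodict, so treat this as an approximation rather than
# a drop-in replacement for the UDF.
json_text <- toJSON(as_list(xml_doc), auto_unbox = TRUE, pretty = TRUE)
cat(json_text)
```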
Normalizing Semi-Structured Data

Semi-structured files typically contain data that has been nested, and we often want to store that data in a structured format that is friendlier to analytics and reporting. Often, as we flatten out deeply nested data, we end up with a multi-join or Cartesian join in which all upper-level elements of the file are joined with all nested elements of the file.
Real-world examples are often very large when flattened. In these cases, we need to evaluate the data contained in the JSON response and determine the best model to represent the data across different tables.
To split the dimensions into separate tables, the first Extract Nested Data component passes the full element downstream as a variant, so that the different datasets can then be split out into separate streams.
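Sketched outside the pipeline, in R with a made-up orders document: keep one table per grain instead of one exploded Cartesian result:

```r
library(jsonlite)
library(tidyr)
library(dplyr)
library(tibble)

# Hypothetical nested document: an order header plus repeating line items
doc <- fromJSON('{"order_id":"A-1","customer":"Acme",
  "lines":[{"sku":"X1","qty":2},{"sku":"X2","qty":5}]}',
  simplifyVector = FALSE)

# Header grain: one row per order
orders <- tibble(order_id = doc$order_id, customer = doc$customer)

# Line grain: one row per line item, keyed back to the order
order_lines <- tibble(line = doc$lines) |>
  unnest_wider(line) |>
  mutate(order_id = doc$order_id, .before = 1)

orders
order_lines
```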
·matillion.com·
Making sense out of Semi-Structured data
Fast JSON, NDJSON and GeoJSON Parser and Generator
A fast JSON parser, generator and validator which converts JSON, NDJSON (Newline Delimited JSON) and GeoJSON (Geographic JSON) data to/from R objects. The standard R data types are supported (e.g. logical, numeric, integer) with configurable handling of NULL and NA values. Data frames, atomic vectors and lists are all supported as data containers translated to/from JSON. GeoJSON data is read in as simple features objects. This implementation wraps the yyjson C library.
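A minimal usage sketch; the package is yyjsonr, and the read_json_str()/write_json_str() names are taken from its documentation (verify against the current reference):

```r
library(yyjsonr)  # the yyjson wrapper described above

# An array of objects comes back as a data.frame by default
df <- read_json_str('[{"x":1,"y":"a"},{"x":2,"y":"b"}]')
str(df)

# ...and can be serialized back out to a JSON string
write_json_str(df)
```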
·coolbutuseless.github.io·
Fast JSON, NDJSON and GeoJSON Parser and Generator
How to Wrangle JSON Data in R with jsonlite, purr and dplyr - Robot Wealth
Working with modern APIs you will often have to wrangle with data in JSON format. This article presents some tools and recipes for working with JSON data with R in the tidyverse. We’ll use purrr::map functions to extract and transform our JSON data. And we’ll provide intuitive examples of the cross-overs and differences between purrr ...
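In the spirit of that article, a minimal sketch (my example, not code from the post): parse with jsonlite, then extract and transform with purrr's map functions:

```r
library(jsonlite)
library(purrr)
library(tibble)

# Hypothetical API payload: a list of trades with nested fills
payload <- '[{"symbol":"AAPL","qty":10,"fills":[{"px":190.1},{"px":190.2}]},
             {"symbol":"MSFT","qty":5,"fills":[{"px":410.0}]}]'

trades <- fromJSON(payload, simplifyVector = FALSE)

tibble(
  symbol   = map_chr(trades, "symbol"),                       # pluck by name
  qty      = map_dbl(trades, "qty"),
  avg_fill = map_dbl(trades, ~ mean(map_dbl(.x$fills, "px")))  # nested extraction
)
```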
·robotwealth.com·
How to Wrangle JSON Data in R with jsonlite, purr and dplyr - Robot Wealth
R - JSON Files
R - JSON Files - A JSON file stores data as text in a human-readable format. JSON stands for JavaScript Object Notation. R can read JSON files using the rjson package.
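A minimal rjson sketch, assuming a local data.json file:

```r
library(rjson)

# Hypothetical file; rjson::fromJSON() also accepts a JSON string directly
result <- fromJSON(file = "data.json")

# rjson returns ordinary nested R lists rather than data frames
str(result)

# A flat list of records can be stacked into a data frame by hand
df <- do.call(rbind, lapply(result, as.data.frame))
```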
·tutorialspoint.com·
R - JSON Files
hendrikvanb
Working with complex, hierarchically nested JSON data in R can be a bit of a pain. In this post, I illustrate how you can convert JSON data into tidy tibbles with particular emphasis on what I’ve found to be a reasonably good, general approach for converting nested JSON into nested tibbles. I use three illustrative examples of increasing complexity to help highlight some pitfalls and build up the logic underlying the approach before applying it in the context of some real-world rock climbing competition data.
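As a toy illustration of the nested-tibble idea the post describes (my example, not the post's code):

```r
library(jsonlite)
library(tibble)
library(tidyr)

# Hypothetical competition-style nested JSON
j <- fromJSON('[{"athlete":"A","results":[{"round":1,"score":7},{"round":2,"score":9}]},
               {"athlete":"B","results":[{"round":1,"score":8}]}]',
              simplifyVector = FALSE)

nested <- tibble(rec = j) |>
  unnest_wider(rec)           # athlete column + a results list-column: a nested tibble

tidy <- nested |>
  unnest_longer(results) |>   # one row per round
  unnest_wider(results)       # round / score columns

tidy
```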
·hendrikvanb.gitlab.io·
hendrikvanb
JSON files & tidy data | The Byrd Lab
My lab investigates how blood pressure can be treated more effectively. Much of that work involves the painstaking development of new concepts and research methods to move forward the state of the art. For example, our work on urinary extracellular vesicles’ mRNA as an ex vivo assay of the ligand-activated transcription factor activity of mineralocorticoid receptors is challenging, fun, and rewarding. With a lot of work from Andrea Berrido and Pradeep Gunasekaran in my lab, we have been moving the ball forward on several key projects on that front.
·byrdlab.org·
JSON files & tidy data | The Byrd Lab
Converting Nested JSON to DataFrame in R? - General - Posit Community
I'm currently working on a project where I need to convert a nested JSON structure into a DataFrame using R. I'm facing some issues with the current approach, and I'd appreciate any help or guidance on how to properly handle this conversion. The JSON file looks like this: json_data <- '{ "resourceType": "QuestionnaireResponse", "id": "example-questionnaireresponse", "questionnaire": "Questionnaire/example", "status": "completed", "subject": { "reference": "Patient/example" }, "a...
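One common approach, sketched against a trimmed stand-in for that resource (the post's full JSON is truncated above): read with jsonlite, then rectangle with tidyr:

```r
library(jsonlite)
library(tibble)
library(tidyr)

# Trimmed stand-in for the truncated QuestionnaireResponse above
json_data <- '{
  "resourceType": "QuestionnaireResponse",
  "id": "example-questionnaireresponse",
  "status": "completed",
  "subject": {"reference": "Patient/example"}
}'

resp <- fromJSON(json_data, simplifyVector = FALSE)

tibble(rec = list(resp)) |>
  unnest_wider(rec) |>                       # top-level scalars become columns
  unnest_wider(subject, names_sep = "_")     # subject.reference -> subject_reference
```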
·forum.posit.co·
Converting Nested JSON to DataFrame in R? - General - Posit Community