dbdiagram Public API | dbdiagram Docs
# Introduction

***API access is currently in Beta and only available if you have a paid plan.***

Using these APIs, you can work with dbdiagram programmatically. For example:

- You can programmatically CRUD diagrams.
- You can generate an [embed link](https://docs.dbdiagram.io/embedding) for a specific diagram. This is especially useful if you need to attach the diagram to your documents, blogs, and websites.

# Authorization

- API tokens are managed at the [workspace](https://docs.dbdiagram.io/workspaces) level, granting access to all diagrams within the workspace.
- Workspace owners can generate new tokens via the "API Tokens" tab in the workspace window.
- API tokens should be stored securely in the user's environment to avoid leaking the key.

# Errors

| HTTP Code | Description |
| --- | --- |
| 200 - OK | Everything worked as expected. |
| 400 - Bad Request | The request was unacceptable due to a missing request parameter or wrong request structure. |
| 401 - Unauthorized | No valid API key provided. |
| 403 - Forbidden | The API key owner does not have permission to perform the request. |
| 404 - Not Found | The requested resource does not exist or cannot be found. |
| 429 - Too Many Requests | Too many requests were sent. |
| 500 - Internal Error | Something went wrong on dbdiagram's side (rarely). |

# Rate-Limiting

## Overview

To prevent DDoS attacks, every API request goes through a rate-limit layer that throttles requests once a user exceeds their quota. Limits are applied per user and per endpoint; quotas (per time frame) may differ by endpoint and are divided into levels:

| Level | Quota | Note |
| --- | --- | --- |
| Level 1 | 120 requests / minute | Every API request is at least this level |
| Level 2 | 60 requests / minute | Requests that require a lot of resources |
| Level 3 | 20 requests / minute | Requests that heavily affect the server's resources |

## Return Header And Status Code

If a request is blocked because it exceeds the limit quota, the status code is set to **429: Too Many Requests**. Every API response header contains the following fields:

- **RateLimit-Limit**: *your_limit_quota_of_endpoint*
- **RateLimit-Remaining**: *remaining_requests_until_reset*
- **RateLimit-Reset**: *next_reset_time*
- **Retry-After**: *next_reset_time* (only present when status code is 429)
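The excerpt stops short of a concrete request. As a hedged illustration of reading these rate-limit headers from R with {httr2} (the base URL, endpoint path, and bearer-token scheme here are assumptions, not confirmed by the docs):

```r
library(httr2)

# Hypothetical endpoint and auth scheme -- consult the dbdiagram API
# reference for the real paths and authentication before relying on this.
resp <- request("https://api.dbdiagram.io/v1/diagrams") |>
  req_headers(
    Authorization = paste("Bearer", Sys.getenv("DBDIAGRAM_API_TOKEN"))
  ) |>
  req_perform()

# Inspect the rate-limit fields described above
resp_header(resp, "RateLimit-Limit")
resp_header(resp, "RateLimit-Remaining")
resp_header(resp, "RateLimit-Reset")
```

On a 429 response, backing off for the number of seconds given in Retry-After before retrying is the polite default.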
·docs.dbdiagram.io·
PostgreSQL Identity Column
This tutorial shows you how to use the GENERATED AS IDENTITY constraint to create the PostgreSQL identity column for a table.
PostgreSQL version 10 introduced a new constraint GENERATED AS IDENTITY that allows you to automatically assign a unique number to a column.
The GENERATED AS IDENTITY constraint is the SQL standard-conforming variant of the good old SERIAL column.
The following illustrates the syntax of the GENERATED AS IDENTITY constraint:

```sql
column_name type GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_option ) ]
```
In this syntax:

- The type can be SMALLINT, INT, or BIGINT.
- GENERATED ALWAYS instructs PostgreSQL to always generate a value for the identity column. If you attempt to insert (or update) a value into a GENERATED ALWAYS AS IDENTITY column, PostgreSQL will issue an error.
- GENERATED BY DEFAULT also instructs PostgreSQL to generate a value for the identity column. However, if you supply a value on insert or update, PostgreSQL will use that value instead of the system-generated one.
PostgreSQL allows a table to have more than one identity column. Like the SERIAL, the GENERATED AS IDENTITY constraint also uses the SEQUENCE object internally.
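The excerpt omits the step that triggers the error mentioned next; a minimal reconstruction following the tutorial's color table (the exact error text may vary by PostgreSQL version):

```sql
CREATE TABLE color (
    color_id INT GENERATED ALWAYS AS IDENTITY,
    color_name VARCHAR NOT NULL
);

-- Works: PostgreSQL generates color_id = 1
INSERT INTO color (color_name) VALUES ('Red');

-- Fails: a GENERATED ALWAYS identity column rejects user-supplied values
INSERT INTO color (color_id, color_name) VALUES (2, 'Green');
-- ERROR: cannot insert a non-DEFAULT value into column "color_id"
```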
To fix the error, you can use the OVERRIDING SYSTEM VALUE clause as follows:

```sql
INSERT INTO color (color_id, color_name)
OVERRIDING SYSTEM VALUE
VALUES (2, 'Green');
```
Alternatively, you can use GENERATED BY DEFAULT AS IDENTITY instead.
Because the GENERATED AS IDENTITY constraint uses the SEQUENCE object, you can specify the sequence options for the system-generated values.
For example, you can specify the starting value and the increment as follows:

```sql
DROP TABLE color;

CREATE TABLE color (
    color_id INT GENERATED BY DEFAULT AS IDENTITY
        (START WITH 10 INCREMENT BY 10),
    color_name VARCHAR NOT NULL
);
```
In this example, the system-generated value for the color_id column starts with 10 and the increment value is also 10.
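As a quick check of those options (the ids shown are what PostgreSQL would generate given START WITH 10 INCREMENT BY 10):

```sql
INSERT INTO color (color_name) VALUES ('Red'), ('Green');

SELECT color_id, color_name FROM color;
-- color_id | color_name
-- ----------+------------
--        10 | Red
--        20 | Green
```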
·neon.tech·
Format SQL Queries
A convenient interface for formatting SQL queries directly within R. It acts as a wrapper around the sql_format Rust crate. The package allows you to format SQL code with customizable options, including indentation, case formatting, and more, ensuring your SQL queries are clean, readable, and consistent.
·dataupsurge.github.io·
Documenting functions
The basics of roxygen2 tags and how to use them for documenting functions.
Examples

@examples provides executable R code showing how to use the function in practice. This is a very important part of the documentation because many people look at the examples before reading anything else. Example code must work without errors, as it is run automatically as part of R CMD check. For the purpose of illustration, it's often useful to include code that causes an error. You can do this by wrapping the code in try(), or by using \dontrun{} to exclude it from the executed example code. For finer control, you can use @examplesIf:

```r
#' @examplesIf interactive()
#' browseURL("https://roxygen2.r-lib.org")
```
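A short sketch of the two escape hatches mentioned above (the example expressions are illustrative):

```r
#' @examples
#' # Runs during R CMD check and must succeed
#' mean(1:10)
#'
#' # The error is displayed but does not fail the check, thanks to try()
#' try(log("a"))
#'
#' \dontrun{
#' # Never executed, only shown in the rendered help page
#' install.packages("roxygen2")
#' }
```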
Instead of including examples directly in the documentation, you can put them in separate files and use @example path/relative/to/package/root to insert them. All functions must have examples for initial CRAN submission.
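For the file-based approach, the tag points at a path relative to the package root (file and function names here are illustrative):

```r
# In R/my-function.R:
#' @example examples/my-function.R

# In examples/my-function.R (plain R code, no #' prefix):
my_function(1:10)
```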
·roxygen2.r-lib.org·
AI-Powered Development: A Practical Guide for Software Engineers
Artificial Intelligence (AI) is no longer a distant future technology; it’s here and it’s reshaping software engineering. Tools like GitHub Copilot and ChatGPT are accelerating the development…
Impact of AI on Developer Productivity:

- Faster development cycles: code suggestions and automation reduce time spent on repetitive tasks.
- Improved code quality: AI tools identify bugs or security risks that may go unnoticed in manual reviews.
- Enhanced learning: engineers can receive real-time feedback or ask AI for code explanations to learn new patterns or frameworks.
GitHub Copilot is a game-changer for writing code. Powered by OpenAI’s Codex model, Copilot suggests lines of code based on the context of what you’re writing. It’s especially useful when you’re working with repetitive tasks or writing boilerplate code.
ChatGPT, an AI chatbot developed by OpenAI, is not just a tool for casual conversations. It can be used to ask technical questions, explain difficult code, or even generate ideas for solving specific coding problems. Developers often use it for quick consultations — whether it’s about debugging or understanding the intricacies of a particular algorithm.
AI-Assisted System Architecture and Design

As AI becomes more sophisticated, it may start to play a role in designing system architectures. Currently, system design is one of the more complex tasks that engineers handle, requiring a deep understanding of the trade-offs between different architectural patterns (monolithic vs. microservices, synchronous vs. asynchronous communication, etc.).

Future AI tools could help design optimal architectures by analyzing the specific needs of a project, performance goals, and scalability requirements. AI could suggest which patterns, frameworks, or technologies are best suited for a given application. It could even generate architecture diagrams, API designs, or database schemas based on historical data from similar projects.

This would revolutionize system design, making it faster and more accessible to engineers of all levels. While experienced architects would still be needed to make judgment calls, AI could drastically reduce the time spent on initial design phases, especially in large and complex systems.
·medium.com·
The most efficient way to manage snapshot tests in R.
Use CI and Github API
Snapshot testing gets difficult when there is more than one variant of the same result. It can be discouraging because snapshots tend to fail due to differences between environments: if one person runs the tests on a Mac and another on a Linux machine, snapshots of rendered images will almost certainly differ. Comparing them produces a failed test even though the code is correct. Add CI to the mix, and you have a hot mess.
The easiest solution is to introduce variants. Variants are versions of snapshots that were created in different environments. In {testthat}, variants are stored in separate directories. You can pass the name of a variant to the variant argument of testthat::expect_snapshot(). If you are on Linux, set variant = "linux"; if you are on a Mac, set variant = "mac".
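For example, a variant-pinned test might look like this (the expression under test is illustrative):

```r
testthat::test_that("data frame prints consistently", {
  testthat::expect_snapshot(
    print(head(mtcars)),
    variant = "linux"  # or "mac", matching the machine that generated it
  )
})
```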
Use snapshots generated on CI as the source of truth. Don’t check in snapshots generated on your machine. Generate them on CI and download them to your machine instead.
Step 1: Archive snapshots on CI. Add this step to your CI testing workflow to allow downloading the generated snapshots.
```yaml
- name: Archive test snapshots
  if: always()
  uses: actions/upload-artifact@v3
  with:
    name: test-snapshots
    path: |
      tests/testthat/_snaps/**/**/*
```
Step 2: Detect the environment to create variants. We can create a make_variant function to detect the platform version, as well as whether we are running on CI. This way, even if we use the same OS on CI and locally, we can still differentiate between snapshots generated on CI and those generated locally.
```r
# tests/testthat/setup.R
is_ci <- function() {
  isTRUE(as.logical(Sys.getenv("CI")))
}

make_variant <- function(platform = shinytest2::platform_variant()) {
  ci <- if (is_ci()) "ci" else NULL
  paste(c(ci, platform), collapse = "-")
}

# In tests:
testthat::expect_snapshot(..., variant = make_variant())
```
Step 3: Ignore your local snapshots. Don't check in snapshots generated on your machine; add them to .gitignore instead, e.g. tests/testthat/_snaps/linux-4.4. This way we can still generate snapshots locally to get fast feedback, while keeping a single source of truth checked into the repository. Since you don't track changes to local snapshots, you need to regenerate them before you start making changes to see whether they change. This adds some complexity to the process, but it keeps the number of shared snapshots in version control minimal. Alternatively, you can keep local snapshots, but when doing code review, focus only on the ones generated on CI.
Step 4: Automate downloading snapshots from CI. To update snapshots generated on CI in Github, we need to:

1. Go to Actions.
2. Find our workflow run.
3. Download the test-snapshots artifact.
4. Unpack it and overwrite the local snapshots.
5. Run testthat::snapshot_review() to review the changes.
6. Commit and push the changes.

This is a lot of steps. We can automate the most laborious ones with the Github API.
The .download_ci_snaps function will:

1. Get the list of artifacts in the repository identified by repo and owner, searching workflow runs generated from the branch we're currently on.
2. Download the latest artifact with the provided name (in our case, "test-snapshots").
3. Unzip it and overwrite the local copy of the snapshots.
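The post's .download_ci_snaps() itself isn't reproduced in the excerpt. A minimal sketch of the same idea using the {gh} package (branch filtering omitted; the artifact's internal path layout is an assumption, so check where unzip() puts the files):

```r
# Sketch only -- the original .download_ci_snaps() may differ.
download_ci_snaps <- function(owner, repo, name = "test-snapshots") {
  # List the repository's artifacts (newest first)
  res <- gh::gh(
    "/repos/{owner}/{repo}/actions/artifacts",
    owner = owner,
    repo = repo
  )
  matches <- Filter(function(a) identical(a$name, name), res$artifacts)
  if (length(matches) == 0) {
    stop("No artifact named '", name, "' found")
  }

  # Download the most recent matching artifact as a zip archive
  zip_path <- tempfile(fileext = ".zip")
  gh::gh(matches[[1]]$archive_download_url, .destfile = zip_path)

  # Overwrite the local snapshots with the CI-generated ones
  unzip(zip_path, exdir = "tests/testthat/_snaps", overwrite = TRUE)
}
```

Run testthat::snapshot_review() afterwards to inspect what changed before committing.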
·jakubsob.github.io·