No Clocks

2688 bookmarks
Exploratory
Exploratory
Data Science is not just for Engineers and Statisticians. Exploratory makes it for Everyone.
·exploratory.io·
Exploratory
Developing a modern data workflow for regularly updated data
Developing a modern data workflow for regularly updated data
This Community Page article describes a data management workflow that can be readily implemented by small research teams and which solves the core challenges of managing regularly updating data. It includes a template repository and tutorial to assist others in setting up their own regularly updating data management systems.
·journals.plos.org·
Developing a modern data workflow for regularly updated data
Building a Strong Data Science Team from the Ground Up
Building a Strong Data Science Team from the Ground Up
Business is changing as a result of the increasing quantity and variety of data available. Significant new opportunities can be realized by harnessing the knowledge contained in these data - if you know where to look. A data science team can help to bring raw data through the analysis process and derive insights that …
·inwt-statistics.com·
Building a Strong Data Science Team from the Ground Up
Unlocking Blue Oceans with Data Science
Unlocking Blue Oceans with Data Science
In this article, we'll examine how Blue Oceans are created and how your organization can create Blue Oceans with Data Science too. We'll finish with a roadmap for your organization to build Blue Oceans with Data Science.
·business-science.io·
Unlocking Blue Oceans with Data Science
Shell and A.I - Steven Bucher - PSConfEU 2024
Shell and A.I - Steven Bucher - PSConfEU 2024
In this extensive lecture, I, Steven Bucher, a product manager on the PowerShell team, discuss the integration of AI into the shell environment. Over the pas...
·youtu.be·
Shell and A.I - Steven Bucher - PSConfEU 2024
Making sense out of Semi-Structured data
Making sense out of Semi-Structured data
Parsing JSON with the Extract Nested Data component within Matillion Data Productivity Cloud connected to Snowflake simplifies the parsing for many semi-structured data patterns. The JSON format has become a more popular format for semi-structured data, primarily because it is more consistent, containing all key:value pairs. JSON handles repeating elements by containing them in an array as a value of a key:value pair. For this article, I am using the same example data set that was used in part one on XML, only this sample data is represented as JSON. I also walk you through how to convert the XML to JSON to simplify parsing XML.
Extract Nested Data
We start by using the Extract Nested Data component, which simplifies parsing semi-structured data. In this example, we’re using several of them to traverse the nested elements.
First, the JSON file is loaded into a table called donut_json, which contains a single column defined as a variant “data_value.” Next, configure the Columns property of the Extract Nested Data component. I used “Autofill” and let the component identify the structure of the JSON. I have deselected all the columns and chosen to pass through the Item attributes and element values. In the example, I also passed through the Filling element, keeping it a variant for further processing downstream.
Since the topping elements are repeating at the first level, the component has flattened toppings into separate rows automatically, so I was able to select the element value level for toppings.
Another property to call out is the Outer Join property on the Configuration tab. Since all of the elements do not exist for every item, I needed to set Outer Join = “Yes.” This retains all the rows for all items, even though only two items have Fillings.
Flatten Variant
The Flatten Variant component is used to flatten arrays. Although the Extract Nested Data component can sometimes be used, the Flatten Variant component lets you explicitly break a column into more rows than the original Extract Nested Data output when you need further granularity.
The batter element in this example has two formats, so I have to treat the Batter array differently by using a Flatten Variant component to parse the array of batters into separate rows. The initial Extract Nested Data component created a new row for each item and each topping. From there, we want a new row for each item, topping, and batter. I tested the batter element to determine if it’s an array by using the IS_ARRAY() function in a Calculator component:
IS_ARRAY("items_item-element_batters_batter")
After that, flatten the array into separate rows per batter element before extracting the attributes:
Set the Column Flatten property to read the batter array column
In the column mappings, use the flatten alias to map to an output variant column
Finally, we bring all the rows back together, remove unwanted columns, and write to a new table:
The Unite component unions all the rows back together
The Rename component allows us to remove any unwanted fields, like the arrays, and rename and reorder the fields
The Rewrite component writes to a new table
The resulting final pipeline is much simpler than the previous XML one.
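For readers who want to see roughly what this flattening corresponds to in plain Snowflake SQL, here is a minimal sketch. The donut_json table and "data_value" column come from the article, but the JSON paths and output aliases are assumptions about the donut sample data, not the SQL the components actually generate.
-- Rough Snowflake equivalent of the Extract Nested Data + Flatten Variant steps
-- (JSON paths and aliases are illustrative only)
SELECT
    item.value:id::STRING        AS item_id,
    item.value:type::STRING      AS item_type,
    topping.value:id::STRING     AS topping_id,
    topping.value:type::STRING   AS topping_type,
    batter.value:id::STRING      AS batter_id,
    batter.value:type::STRING    AS batter_type
FROM donut_json d,
     LATERAL FLATTEN(input => d."data_value":items:item)                  item,
     LATERAL FLATTEN(input => item.value:topping)                         topping,
     LATERAL FLATTEN(input => item.value:batters:batter, OUTER => TRUE)   batter;
Here OUTER => TRUE plays the same role as the component's Outer Join = "Yes" setting: rows are kept even when an item has no entries for that nested element.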
Convert XML to JSON
Our example pipeline started with a file that was already in a JSON format. However, if you have an XML file that needs to be converted and you would like to convert the XML to JSON inside a pipeline, you’ll use the code below.
Create an Orchestration Pipeline
First, I created a separate Orchestration pipeline that contains a SQL Script component to create a Snowflake UDF using the code below. This code calls a Snowflake Snowpark package called “xmltodict.” Our example XML_to_JSON Python code follows.
Parse With the Calculator Component
Next, in my Transformation pipeline, I called the function in a Calculator component. The parse_json function formats the JSON so it’s readable.
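The article's actual UDF and Calculator code are not reproduced in this excerpt. The following is a minimal sketch of what the two steps might look like, assuming the UDF is named xml_to_json and the source column is "data_value" (both assumptions), using the xmltodict package the article mentions.
-- Sketch of the SQL Script component: a Python UDF converting XML text to a JSON string
CREATE OR REPLACE FUNCTION xml_to_json(xml_string VARCHAR)
RETURNS VARCHAR
LANGUAGE PYTHON
RUNTIME_VERSION = '3.10'
PACKAGES = ('xmltodict')
HANDLER = 'convert'
AS
$$
import json
import xmltodict
def convert(xml_string):
    # Parse the XML into a Python dict, then serialize it as JSON text
    return json.dumps(xmltodict.parse(xml_string))
$$;
-- Sketch of the Calculator component expression: turn the JSON string into a readable VARIANT
PARSE_JSON(xml_to_json("data_value"))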
Normalizing Semi-Structured Data
Semi-structured files typically contain data that has been nested, and we often want to store that data in a structured format more friendly to analytics and reporting. Many times, as we flatten out deeply nested data, we end up with a multi-join or cartesian join where all upper-level elements of the file are joined with all nested elements of the file.
Real-world examples are often very large when flattened. In these cases, we need to evaluate the data contained in the JSON response and determine the best model to represent the data in different tables.
To split the dimensions into separate tables, the first Extract Nested Data component passes the full element downstream as a variant, so that the different datasets can then be split out into separate streams.
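As a hypothetical illustration of where that splitting ends up once each stream is written out, the resulting tables might look something like this (all table and column names here are invented, not from the article):
-- One table per grain, each written from its own stream of the pipeline
CREATE OR REPLACE TABLE donut_items AS
SELECT DISTINCT item_id, item_type, item_name
FROM donut_flattened;
CREATE OR REPLACE TABLE donut_item_toppings AS
SELECT DISTINCT item_id, topping_id, topping_type
FROM donut_flattened;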
·matillion.com·
Making sense out of Semi-Structured data
AI Database Generator
AI Database Generator
AI Database Generator is a sophisticated tool that utilizes artificial intelligence and machine learning algorithms to automate the design and creation of database schemas.
·databasesample.com·
AI Database Generator
Rentometer: Rentometer API Docs
Rentometer: Rentometer API Docs
Get a quick rent estimate by address or zip code with Rentometer. Compare rental rates and comps to ensure you're pricing your property right.
·rentometer.com·
Rentometer: Rentometer API Docs
autodb: Automatic Database Normalisation for Data Frames
autodb: Automatic Database Normalisation for Data Frames
Automatic normalisation of a data frame to third normal form, with the intention of easing the process of data cleaning. (Usage to design your actual database for you is not advised.) Originally inspired by the 'AutoNormalize' library for 'Python' by 'Alteryx' (https://github.com/alteryx/autonormalize), with various changes and improvements. Automatic discovery of functional or approximate dependencies, normalisation based on those, and plotting of the resulting "database" via 'Graphviz', with options to exclude some attributes at discovery time, or remove discovered dependencies at normalisation time.
·cran.r-project.org·
autodb: Automatic Database Normalisation for Data Frames
Access, retrieve, and work with CMHC data.
Access, retrieve, and work with CMHC data.
Wrapper around the Canada Mortgage and Housing Corporation (CMHC) web interface. It enables programmatic and reproducible access to a wide variety of housing data from CMHC.
·mountainmath.github.io·
Access, retrieve, and work with CMHC data.
HelloData - Full Product Demo (6-3-2024)
HelloData - Full Product Demo (6-3-2024)
Power your multifamily rent surveys with real-time data on over 25M units nationwide, sourced entirely from property websites and public data sources.
·youtu.be·
HelloData - Full Product Demo (6-3-2024)
PostgreSQL Foreign Key
PostgreSQL Foreign Key
In this tutorial, you will learn about PostgreSQL foreign key and how to add foreign keys to tables using foreign key constraints.
The following illustrates the foreign key constraint syntax:
[CONSTRAINT fk_name]
FOREIGN KEY (fk_columns)
REFERENCES parent_table (parent_key_columns)
[ON DELETE delete_action]
[ON UPDATE update_action]
In this syntax: First, specify the name for the foreign key constraint after the CONSTRAINT keyword. The CONSTRAINT clause is optional. If you omit it, PostgreSQL will assign an auto-generated name. Second, specify one or more foreign key columns in parentheses after the FOREIGN KEY keywords. Third, specify the parent table and parent key columns referenced by the foreign key columns in the REFERENCES clause. Finally, specify the desired delete and update actions in the ON DELETE and ON UPDATE clauses.
Since the primary key is rarely updated, the ON UPDATE action is infrequently used in practice. We’ll focus on the ON DELETE action.
PostgreSQL supports the following actions: SET NULL, SET DEFAULT, RESTRICT, NO ACTION, and CASCADE.
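To make the syntax above concrete, here is a small illustrative pair of tables; the table and column names are invented for this sketch rather than taken from the tutorial.
-- Parent table
CREATE TABLE customers (
    customer_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_name VARCHAR(255) NOT NULL
);
-- Child table with a foreign key back to customers
CREATE TABLE contacts (
    contact_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id INT,
    contact_name VARCHAR(255) NOT NULL,
    CONSTRAINT fk_customer
        FOREIGN KEY (customer_id)
        REFERENCES customers (customer_id)
        ON DELETE CASCADE
);
With ON DELETE CASCADE, deleting a row from customers automatically deletes its rows in contacts; swapping in SET NULL, RESTRICT, or NO ACTION changes only that delete behavior.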
·neon.tech·
PostgreSQL Foreign Key
PostgreSQL Copy Table: A Step-by-Step Guide
PostgreSQL Copy Table: A Step-by-Step Guide
In this tutorial, you will learn how to copy an existing table to a new one using various PostgreSQL copy table statements.
To copy a table completely, including both table structure and data, you use the following statement: CREATE TABLE new_table AS TABLE existing_table;
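A few closely related forms are sketched below with placeholder table names. Note that CREATE TABLE ... AS copies column names, types, and data, but not indexes or constraints.
-- Copy structure and data
CREATE TABLE contacts_backup AS TABLE contacts;
-- Copy structure only, without rows
CREATE TABLE contacts_empty AS TABLE contacts WITH NO DATA;
-- Copy a filtered subset of rows
CREATE TABLE recent_contacts AS
SELECT * FROM contacts
WHERE contact_id > 1000;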
·neon.tech·
PostgreSQL Copy Table: A Step-by-Step Guide