Structured Streaming Programming Guide - Spark 3.5.1 Documentation
Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine.
Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees through checkpointing and Write-Ahead Logs.
Structured Streaming queries are processed using a micro-batch processing engine, which processes data streams as a series of small batch jobs, thereby achieving end-to-end latencies as low as 100 milliseconds and exactly-once fault-tolerance guarantees.
However, since Spark 2.3, we have introduced a new low-latency processing mode called Continuous Processing, which can achieve end-to-end latencies as low as 1 millisecond with at-least-once guarantees.
The output can be defined in one of three modes:
Complete Mode
Append Mode
Update Mode
The engine uses checkpointing and write-ahead logs to record the offset range of the data being processed in each trigger.
sliding event-time windows
We can easily define watermarking on the previous example using withWatermark() as shown below.
In other words, late data within the threshold will be aggregated, but data later than the threshold will start getting dropped (see later in the section for the exact guarantees). Let’s understand this with an example.
This means the system needs to know when an old aggregate can be dropped from the in-memory state because the application is not going to receive late data for that aggregate any more.
In Spark 2.1, we have introduced watermarking, which lets the engine automatically track the current event time in the data and attempt to clean up old state accordingly. You can define the watermark of a query by specifying the event time column and the threshold on how late the data is expected to be in terms of event time.
Note that after every trigger, the updated counts (i.e. purple rows) are written to sink as the trigger output, as dictated by the Update mode.
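The withWatermark() call referenced above is not included in the highlights; as a rough sketch adapted from the guide, and assuming Spark's built-in "rate" source (which emits a timestamp column we can treat as event time) in place of the guide's word stream, a windowed count with a 10-minute watermark in Update mode might look like this:

from pyspark.sql import SparkSession
from pyspark.sql.functions import window

spark = SparkSession.builder.appName("watermark-sketch").getOrCreate()

# Hypothetical event stream: the built-in "rate" source provides a `timestamp` column.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

windowed_counts = (events
    .withWatermark("timestamp", "10 minutes")                      # state for data >10 min late gets dropped
    .groupBy(window(events.timestamp, "10 minutes", "5 minutes"))  # sliding event-time window
    .count())

query = (windowed_counts.writeStream
    .outputMode("update")   # only updated counts are written each trigger, as in the Update-mode excerpt above
    .format("console")
    .start())
query.awaitTermination()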
·spark.apache.org·
Structured Streaming Programming Guide - Spark 3.5.1 Documentation
Add Jar to standalone pyspark
I'm launching a pyspark program:
$ export SPARK_HOME=
$ export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.9-src.zip
$ python
And the py code: from pyspark import SparkContext,
.config('spark.jars.packages', 'org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1')
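As a minimal sketch of attaching a Maven package to a standalone PySpark session (the Kafka coordinates are just the ones highlighted above; the app name is a placeholder), building the session from Python might look like:

from pyspark.sql import SparkSession

# Spark resolves the Maven coordinates and ships the jars to the driver and executors.
spark = (SparkSession.builder
    .appName("kafka-package-sketch")
    .config("spark.jars.packages", "org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1")
    .getOrCreate())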
·stackoverflow.com·
Add Jar to standalone pyspark
One-hot encoding in PySpark
To perform one-hot encoding in PySpark, we must convert the categorical column into a numeric column (0, 1, ...) using StringIndexer, and then convert the numeric column into one-hot encoded columns using OneHotEncoder.
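A minimal sketch of that two-step pipeline, with a made-up `color` column (Spark 3.x API, where OneHotEncoder is an estimator that must be fit):

from pyspark.sql import SparkSession
from pyspark.ml.feature import StringIndexer, OneHotEncoder

spark = SparkSession.builder.appName("ohe-sketch").getOrCreate()
df = spark.createDataFrame([("red",), ("green",), ("red",)], ["color"])

# Step 1: map each category to a numeric index (0, 1, ...)
indexed = StringIndexer(inputCol="color", outputCol="color_idx").fit(df).transform(df)

# Step 2: turn the numeric index into a (sparse) one-hot vector
encoded = OneHotEncoder(inputCols=["color_idx"], outputCols=["color_vec"]).fit(indexed).transform(indexed)
encoded.show(truncate=False)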
·skytowner.com·
One-hot encoding in PySpark
Shuffle join in Spark SQL
Shuffle consists of moving data with the same key to one executor in order to execute some specific processing on it. We could think that it concerns only *ByKey operations, but that's not necessarily true.
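A small sketch to observe a shuffle-based join: with broadcast joins disabled, the physical plan should show an Exchange (the shuffle) on the join key feeding a SortMergeJoin. Table sizes and column names below are arbitrary.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("shuffle-join-sketch")
    .config("spark.sql.autoBroadcastJoinThreshold", "-1")  # rule out broadcast joins for the demo
    .getOrCreate())

left = spark.range(1_000_000).withColumnRenamed("id", "key")
right = spark.range(1_000_000).withColumnRenamed("id", "key")

left.join(right, "key").explain()  # look for Exchange hashpartitioning + SortMergeJoin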
·waitingforcode.com·
Shuffle join in Spark SQL
Getting started with MongoDB, PySpark, and Jupyter Notebook | MongoDB Blog
Learn how to leverage MongoDB data in your Jupyter notebooks via the MongoDB Spark Connector and PySpark. We will load financial security data from MongoDB, calculate a moving average, and then update MongoDB with the new data.
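A rough sketch of that flow, assuming MongoDB Spark Connector 10.x, a local mongod, and made-up database, collection, and column names:

from pyspark.sql import SparkSession
from pyspark.sql.functions import avg
from pyspark.sql.window import Window

spark = (SparkSession.builder
    .appName("mongo-sketch")
    .config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:10.2.1")
    .getOrCreate())

# Load security prices from MongoDB (URI, database, and collection are placeholders)
prices = (spark.read.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")
    .option("database", "finance").option("collection", "prices")
    .load())

# 5-row moving average of the close price per symbol, ordered by trade date
w = Window.partitionBy("symbol").orderBy("date").rowsBetween(-4, 0)
with_ma = prices.withColumn("moving_avg", avg("close").over(w))

# Write the enriched rows back to MongoDB
(with_ma.write.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")
    .option("database", "finance").option("collection", "prices_ma")
    .mode("append")
    .save())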
·mongodb.com·
Getting started with MongoDB, PySpark, and Jupyter Notebook | MongoDB Blog
How to connect to remote hive server from spark
I'm running Spark locally and want to access Hive tables, which are located in the remote Hadoop cluster. I'm able to access the Hive tables by launching beeline under SPARK_HOME [ml@master spa...
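The usual approach is to point the local session at the cluster's Hive metastore and enable Hive support; a minimal sketch, with a placeholder thrift URI and a hypothetical table name:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("remote-hive-sketch")
    .config("hive.metastore.uris", "thrift://metastore-host:9083")  # placeholder host:port of the remote metastore
    .enableHiveSupport()
    .getOrCreate())

spark.sql("SHOW DATABASES").show()
spark.sql("SELECT * FROM some_db.some_table LIMIT 10").show()  # hypothetical table on the remote cluster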
·stackoverflow.com·
How to connect to remote hive server from spark
spark createOrReplaceTempView vs createGlobalTempView
Spark Dataset 2.0 provides two functions, createOrReplaceTempView and createGlobalTempView. I am not able to understand the basic difference between the two functions. According to the API documents:
createOrReplaceTempView() creates or replaces a local temporary view with this dataframe df. The lifetime of this view is tied to the SparkSession class.
createGlobalTempView() creates a global temporary view with this dataframe df. The lifetime of this view is tied to the Spark application itself.
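A small sketch of the practical difference: a global temp view lives in the reserved global_temp database and is visible across SparkSessions within the same application, while a local temp view is scoped to the session that created it.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tempview-sketch").getOrCreate()
df = spark.range(5)

df.createOrReplaceTempView("local_view")    # tied to this SparkSession
df.createGlobalTempView("global_view")      # tied to the Spark application

spark.sql("SELECT * FROM local_view").show()
spark.sql("SELECT * FROM global_temp.global_view").show()

other = spark.newSession()
other.sql("SELECT * FROM global_temp.global_view").show()   # visible from the new session
# other.sql("SELECT * FROM local_view")                     # would fail: local views are session-scoped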
·stackoverflow.com·
spark createOrReplaceTempView vs createGlobalTempView
Create Your Very Own Apache Spark/Hadoop Cluster....then do something with it? - Confessions of a Data Guy
I’ve never seen so many posts about Apache Spark before; not sure if it’s 3.0, or because the world is burning down. I’ve written about Spark a few times, even 2 years ago, but it still seems to be steadily increasing in popularity, albeit still missing from many companies’ tech stacks. With the continued rise […]
·confessionsofadataguy.com·
Create Your Very Own Apache Spark/Hadoop Cluster....then do something with it? - Confessions of a Data Guy
Spark Architecture: Shuffle
This is my second article about Apache Spark architecture and today I will be more specific and tell you about the shuffle, one of the most interesting topics in the overall Spark design. The previous part was mostly about general Spark architecture and its memory management. It can be accessed here. The next one is [...]
·0x0fff.com·
Spark Architecture: Shuffle
Spark Broadcast Variables - Spark by {Examples}
In Spark RDD and DataFrame, broadcast variables are read-only shared variables that are cached and available on all nodes in a cluster so that tasks can access and use them. Instead of sending this data along with every task, Spark distributes broadcast variables to the machines using efficient broadcast algorithms to reduce communication costs.
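A minimal sketch of the idea, with a made-up lookup dict broadcast once and read from a UDF instead of being shipped with every task:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("broadcast-sketch").getOrCreate()

# Broadcast a small lookup table once; executors keep a cached, read-only copy.
state_names = spark.sparkContext.broadcast({"CA": "California", "NY": "New York"})

@udf(returnType=StringType())
def full_name(code):
    return state_names.value.get(code, "Unknown")

df = spark.createDataFrame([("CA",), ("NY",), ("TX",)], ["state"])
df.withColumn("state_name", full_name("state")).show()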
·sparkbyexamples.com·
Spark Broadcast Variables - Spark by {Examples}
Tuning - Spark 3.3.0 Documentation
Tuning and performance optimization guide for Spark 3.3.0
The main point to remember here is that the cost of garbage collection is proportional to the number of Java objects, so using data structures with fewer objects (e.g. an array of Ints instead of a LinkedList) greatly lowers this cost.
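The same guide suggests measuring GC impact before tuning; a minimal sketch of turning on GC logging for the executors with the flags the guide recommends (pre-JDK 9 style; newer JVMs use -Xlog:gc instead):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("gc-logging-sketch")
    .config("spark.executor.extraJavaOptions",
            "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")  # GC stats appear in executor logs
    .getOrCreate())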
·spark.apache.org·
Tuning - Spark 3.3.0 Documentation
Reading Spark DAGs - DZone Java
See how to effectively read Directed Acyclic Graphs (DAGs) in Spark to better understand the steps a program takes to complete a computation.
·dzone.com·
Reading Spark DAGs - DZone Java
Dynamic Partition Pruning in Spark 3.0 - DZone Big Data
This blog gives you a deep insight into Dynamic Partition Pruning in Apache Spark and how it works in the newer Spark releases.
Therefore, we don’t need to actually scan the full fact table as we are only interested in two filtering partitions that result from the dimension table.
To avoid this, a simple approach is to take the filter on the dimension table, incorporate it into a subquery, and run that subquery below the scan of the fact table.
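A rough sketch of the shape of query where this kicks in (Spark 3.0+, where spark.sql.optimizer.dynamicPartitionPruning.enabled defaults to true); the table names, partition column, and filter below are made up:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dpp-sketch").getOrCreate()

# Fact table partitioned on date_id; small dimension table with a selective filter.
spark.range(1_000_000).selectExpr("id", "id % 100 AS date_id") \
    .write.mode("overwrite").partitionBy("date_id").saveAsTable("fact_sales")
spark.range(100).selectExpr("id AS date_id", "id % 7 AS day_of_week") \
    .write.mode("overwrite").saveAsTable("dim_date")

pruned = spark.sql("""
    SELECT f.*
    FROM fact_sales f
    JOIN dim_date d ON f.date_id = d.date_id
    WHERE d.day_of_week = 1
""")
pruned.explain()  # the fact-table scan should include a dynamicpruningexpression on date_id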
·dzone.com·
Dynamic Partition Pruning in Spark 3.0 - DZone Big Data