What is Shuffling in Spark

Shuffling in Spark is the mechanism that redistributes data across the different executors or workers in a cluster.

Why do we need to redistribute the data?

A) Redistribution is needed when the number of data partitions has to be increased or decreased, for example:

  • When there are too few partitions to spread the data load across the cluster
  • When there are so many partitions that the task-scheduling overhead itself becomes the bottleneck in processing time
Redistribution can also be triggered explicitly on an existing distributed collection such as an RDD or DataFrame by using Spark's repartition and coalesce APIs, as sketched below.
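
Here is a minimal PySpark sketch of the two APIs (the dataset and variable names are illustrative): repartition can raise or lower the partition count at the cost of a full shuffle, while coalesce only merges existing partitions, so it can lower the count more cheaply.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("repartition-example").getOrCreate()

    # A stand-in for a real dataset.
    events_df = spark.range(0, 1000000)

    # repartition(n) performs a full shuffle and can increase or
    # decrease the number of partitions.
    more_parts = events_df.repartition(200)

    # coalesce(n) only merges existing partitions, so it can decrease
    # the partition count while avoiding a full shuffle.
    fewer_parts = events_df.coalesce(10)

    print(more_parts.rdd.getNumPartitions())   # 200
    print(fewer_parts.rdd.getNumPartitions())  # at most 10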

B) During aggregations and joins on data collections in Spark, all the records that belong to the same aggregation group or join key must reside in a single partition. When the existing partitioning scheme doesn't satisfy this condition, Spark has to redistribute the data in the input collections before performing the aggregation or join.
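
For instance, in this hypothetical sketch (the tables are made up), neither DataFrame is partitioned on the join key, so Spark must shuffle both sides to bring matching keys into the same partition:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("join-shuffle-example").getOrCreate()

    # Disable broadcast joins so the plan below shows a shuffle-based
    # join even for these tiny example tables.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)

    users = spark.createDataFrame([(1, "alice"), (2, "bob")],
                                  ["user_id", "name"])
    orders = spark.createDataFrame([(1, 9.99), (1, 5.00), (2, 3.50)],
                                   ["user_id", "amount"])

    # Neither side is partitioned on user_id, so Spark must shuffle
    # both inputs before it can match the keys.
    joined = users.join(orders, on="user_id")

    # The physical plan contains Exchange operators, i.e. the shuffles.
    joined.explain()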

Spark shuffling is an expensive process because it moves data between the executors or workers in the cluster. Imagine a cluster with thousands of workers and a huge volume of data: in that situation shuffling can introduce significant overhead.

Generally, a Spark shuffle involves:
  • Disk I/O
  • Network I/O
  • Data Serialization & Deserialization
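
These costs can be influenced through configuration. Below is a hedged sketch of a few shuffle-related settings (the values shown are illustrative, not recommendations):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("shuffle-tuning-example")
        # Number of partitions used for DataFrame/SQL shuffles
        # (the default is 200); tune it to data volume and cluster size.
        .config("spark.sql.shuffle.partitions", "400")
        # Kryo is typically faster and more compact than Java
        # serialization, cutting the serialization cost of a shuffle.
        .config("spark.serializer",
                "org.apache.spark.serializer.KryoSerializer")
        # Compress shuffle files, trading CPU for less disk and
        # network I/O (this is already the default).
        .config("spark.shuffle.compress", "true")
        .getOrCreate()
    )
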
Some of the operations that might lead to a shuffle are:

  • join
  • cogroup
  • groupWith
  • leftOuterJoin
  • rightOuterJoin
  • groupByKey
  • reduceByKey
  • repartition
  • coalesce
Data engineers should always look for opportunities to avoid a shuffle in their process. For example, they can choose reduceByKey instead of groupByKey for the following reasons (contrasted in the sketch after this list):

  • groupByKey shuffles every record across the network before grouping, which is slow and memory-hungry.
  • reduceByKey first combines records within each partition and shuffles only those partial results, so far less data crosses the network.
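
A minimal word-count sketch contrasting the two (the input data here is made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("reduce-vs-group").getOrCreate()
    sc = spark.sparkContext

    words = sc.parallelize(["spark", "shuffle", "spark", "spark", "shuffle"])
    pairs = words.map(lambda w: (w, 1))

    # groupByKey ships every (word, 1) record across the network,
    # then sums the grouped values.
    slow_counts = pairs.groupByKey().mapValues(sum)

    # reduceByKey sums within each partition first, so only one partial
    # count per word per partition crosses the network.
    fast_counts = pairs.reduceByKey(lambda a, b: a + b)

    print(sorted(fast_counts.collect()))  # [('shuffle', 2), ('spark', 3)]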

To summarize, shuffling is one of the important operations in Spark: it redistributes data so that downstream work can run efficiently. At the same time, a shuffle can be very expensive, especially when the cluster has a large number of workers or the data volume is huge, so unnecessary shuffles should be avoided. Understanding the concept clearly also helps in designing fault-tolerant, robust, and reliable data pipelines.
