Shuffling in Spark is the mechanism that redistributes data across the different executors or workers in a cluster.
Why do we need to redistribute the data?
A) Redistribution is needed when the number of data partitions has to be increased or decreased, for example:
- When there are too few partitions to spread the data load across the cluster
- When there are so many partitions that task-scheduling overhead becomes the bottleneck in processing time
Redistribution can also be performed explicitly on an existing distributed collection (an RDD, DataFrame, etc.) through Spark's repartition and coalesce APIs: repartition always shuffles to produce the requested number of partitions, while coalesce merges existing partitions and avoids a full shuffle by default (see the sketch below).
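Here is a minimal sketch of the two APIs in Scala. The application name, local master, and partition counts are illustrative assumptions, not recommendations:

```scala
import org.apache.spark.sql.SparkSession

object RepartitionExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RepartitionExample")
      .master("local[*]") // assumption: running locally for illustration
      .getOrCreate()

    // A toy dataset standing in for a larger collection
    val df = spark.range(0, 1000000)

    // repartition(n) triggers a full shuffle to produce exactly n partitions
    val widened = df.repartition(200)

    // coalesce(n) merges existing partitions into fewer ones and, by default,
    // avoids a full shuffle (data is combined within existing partitions)
    val narrowed = widened.coalesce(10)

    println(s"widened:  ${widened.rdd.getNumPartitions} partitions")
    println(s"narrowed: ${narrowed.rdd.getNumPartitions} partitions")

    spark.stop()
  }
}
```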
B) During aggregations and joins, all records belonging to the same aggregation or join key must reside in a single partition. When the existing partitioning scheme does not satisfy this condition, Spark has to redistribute the input collections before performing the aggregation or join.
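The following sketch shows a join that forces such a redistribution. The table names, columns, and data are hypothetical, and the broadcast-join threshold is disabled here only so the shuffle-based join is visible in the plan:

```scala
import org.apache.spark.sql.SparkSession

object ShuffleOnJoin {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ShuffleOnJoin")
      .master("local[*]") // assumption: running locally for illustration
      // disable broadcast joins so the shuffle-based join shows up in the plan
      .config("spark.sql.autoBroadcastJoinThreshold", "-1")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input collections; schemas are illustrative only
    val orders    = Seq((1, "laptop"), (2, "phone"), (1, "mouse")).toDF("customer_id", "item")
    val customers = Seq((1, "Alice"), (2, "Bob")).toDF("customer_id", "name")

    // Joining on customer_id requires records with the same key to be in the
    // same partition, so Spark shuffles both inputs before joining
    val joined = orders.join(customers, "customer_id")

    // The physical plan shows the Exchange (shuffle) operators Spark inserted
    joined.explain()

    spark.stop()
  }
}
```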
Shuffling is an expensive process because it moves data between executors or workers across the cluster. With thousands of workers and a huge volume of data, the shuffle can become a significant overhead.
Generally, a Spark shuffle involves:
- Disk I/O
- Network I/O
- Data Serialization & Deserialization
Some of the operations that can trigger a shuffle are:
- join
- cogroup
- groupWith
- leftOuterJoin
- rightOuterJoin
- groupByKey
- reduceByKey
- repartition
- coalesce
Data engineers should always look for opportunities to avoid shuffling in their pipelines. For example, they can choose reduceByKey instead of groupByKey for the following reasons (see the sketch after this list).
- groupByKey shuffles every record across the network before aggregating, which makes it slow and memory-hungry.
- reduceByKey first combines values within each partition, so only the partial aggregation results are shuffled.
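A minimal sketch of the two approaches on a toy word-count-style dataset; the data and application name are assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession

object ReduceVsGroupByKey {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ReduceVsGroupByKey")
      .master("local[*]") // assumption: running locally for illustration
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical key-value pairs
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1), ("b", 1), ("a", 1)))

    // groupByKey ships every (key, value) record across the network, then sums
    val withGroupBy = pairs.groupByKey().mapValues(_.sum)

    // reduceByKey sums within each partition first (map-side combine),
    // so only one partial sum per key per partition is shuffled
    val withReduceBy = pairs.reduceByKey(_ + _)

    withGroupBy.collect().foreach(println)
    withReduceBy.collect().foreach(println)

    spark.stop()
  }
}
```

Both produce the same counts; the difference is how much data crosses the network during the shuffle.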
To summarize, shuffling is one of the important operations in Spark: it redistributes data so that pipelines can run efficiently, but it can also be expensive, especially when the cluster has many workers or the data volume is huge. One should therefore minimize shuffling in their pipelines, while understanding the concept well enough to design fault-tolerant, robust, and reliable data pipelines.