Syntax And Examples Of Spark Repartition

Introduction to Spark Repartition

The repartition() method is used to increase or decrease the number of partitions of an RDD or DataFrame in Spark. It performs a full shuffle of data across all the nodes of the cluster and creates partitions of roughly equal size. This makes it a costly operation, since it involves data movement over the network. Partitions play an important role in the degree of parallelism: the number of parallel tasks running in each stage equals the number of partitions, so we can control parallelism using the repartition() method. The partition count also determines the number of files generated in the output.

The syntax is:

obj.repartition(numPartitions)

Here, obj is an RDD or DataFrame, and numPartitions is the number of partitions we want to create.

How to Use Spark Repartition?

To use the method, the following steps have to be followed:

We can create an RDD/DataFrame in two ways: a) by loading data from external sources such as HDFS or databases like Cassandra; or b) by calling the parallelize() method on a Spark context object and passing a collection as the parameter (then invoking toDF() if we need a DataFrame).
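The second path above can be sketched as follows. This is a minimal illustration, not the article's original listing; the local master setting and the sample (state, cases) values are assumptions:

```scala
import org.apache.spark.sql.SparkSession

object RepartitionSetup {
  def main(args: Array[String]): Unit = {
    // Local session for illustration only.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("RepartitionSetup")
      .getOrCreate()
    import spark.implicits._

    // b) parallelize() a collection on the Spark context to get an RDD ...
    val rdd = spark.sparkContext.parallelize(Seq(("Maharashtra", 100), ("Delhi", 50)))

    // ... then toDF() if we need a DataFrame.
    val df = rdd.toDF("state", "cases")

    // repartition() triggers a full shuffle into the requested number of partitions.
    val repartitioned = df.repartition(4)
    println(repartitioned.rdd.getNumPartitions)

    spark.stop()
  }
}
```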

Next, we decide an appropriate value for numPartitions. We should choose neither a very large nor a very small value. If the value is too large, a large number of small files will be generated, and it becomes difficult for HDFS to maintain the metadata. If it is too small, the data in each partition will be huge and will take a long time to process.

Examples of Spark Repartition

The following are examples of Spark repartition:

Example #1 – On RDDs
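The original code listing for this example did not survive extraction. Below is a minimal sketch of the kind of RDD job the text describes: repartition to 10, sum cases per state, and sort descending. The inline sample lines, state names, and case counts are invented for illustration:

```scala
import org.apache.spark.sql.SparkSession

object RepartitionRddExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("RepartitionRddExample")
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical input: each record is "state,cases".
    val rdd = sc.parallelize(Seq(
      "Maharashtra,1985", "Delhi,898", "Tamil Nadu,911", "Kerala,357"
    ))

    // Repartition to 10; the subsequent stages will run 10 tasks each.
    val repartitioned = rdd.repartition(10)
    println(repartitioned.getNumPartitions)

    // Sum cases per state and sort descending so the most affected states come first.
    val topStates = repartitioned
      .map { line =>
        val Array(state, cases) = line.split(",")
        (state, cases.toLong)
      }
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)

    topStates.collect().foreach(println)
    spark.stop()
  }
}
```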



We sort the output by the number of cases in descending order, so that the most affected states appear at the top of the output.

Spark UI:

Since we created 10 partitions, the last two stages each spawn 10 tasks.

Example #2 – On Dataframes

Let's consider the same problem as in example 1, but this time we will solve it using DataFrames and Spark SQL.

spark.sql("select state,SUM(cases) as cases from tempTable where date='2023-04-10' group by state order by cases desc").show(10,false)

Here we create a schema first. Then, while reading the CSV file, we impose the defined schema in order to create a DataFrame. Next, we call the repartition method and change the number of partitions to 10. After this, we register the DataFrame newDf as a temp table. Finally, we write a Spark SQL query to get the required result. Please note that a DataFrame has no direct method to get its number of partitions; we have to go through its underlying RDD and call the getNumPartitions method.
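The steps just described can be sketched as follows. The CSV path, the column names, and the schema layout are assumptions, since the article's original listing was lost; only the final spark.sql query survives from the source:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StructType, StructField, StringType, LongType}

object RepartitionDfExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("RepartitionDfExample")
      .getOrCreate()

    // 1. Define a schema up front (hypothetical column layout).
    val schema = StructType(Seq(
      StructField("date", StringType, nullable = true),
      StructField("state", StringType, nullable = true),
      StructField("cases", LongType, nullable = true)
    ))

    // 2. Impose the schema while reading the CSV (placeholder path).
    val df = spark.read
      .option("header", "true")
      .schema(schema)
      .csv("/path/to/covid_cases.csv")

    // 3. Change the number of partitions to 10.
    val newDf = df.repartition(10)

    // No direct DataFrame method: go through the underlying RDD.
    println(newDf.rdd.getNumPartitions)

    // 4. Register as a temp table and query it with Spark SQL.
    newDf.createOrReplaceTempView("tempTable")
    spark.sql("select state,SUM(cases) as cases from tempTable " +
        "where date='2023-04-10' group by state order by cases desc")
      .show(10, false)

    spark.stop()
  }
}
```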


We print only the top ten states here, and the results match those calculated in the previous example.

Recommended Articles

This is a guide to Spark Repartition. Here we discussed the introduction to Spark repartition and how to use it, along with different examples and their code implementation. You may also have a look at the following articles to learn more –
