Spark 03: Understanding Resilient Distributed Dataset

You are not qualified as an Apache Spark developer until you know what a Resilient Distributed Dataset (RDD) is. It is the fundamental abstraction for representing data in Spark's memory. Advanced representations like DataFrame are built on top of RDD, but it is always better to start with the most basic one: the RDD. An RDD is nothing other than a data structure with some special properties.

We all know that Apache Spark is a distributed, general-purpose cluster-computing framework. Some common problems faced in a distributed environment include, but are not limited to:
  1. Remote access of data is expensive
  2. High chance of failure
  3. Runtime errors are expensive and hard to track
  4. Wasting computing power is way too expensive
RDD is designed to address these problems. In the following sections, you will see the properties of RDD and how they solve each of them.

RDD is Distributed

As the name suggests, a Resilient Distributed Dataset is distributed by nature, which means its partitions can be safely spread across nodes and operated on in parallel. This property of RDD solves the problem of expensive remote data access in distributed computing, because a computation can be scheduled on the node that already holds the partition it depends on, so the code moves to the data rather than the data to the code.
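
To see this distribution yourself, here is a minimal sketch (the object name PartitionDemo and the sample numbers are made up for illustration) that creates an RDD with four partitions and prints which partition holds which element:
import org.apache.spark.SparkContext

object PartitionDemo {

    def main(args: Array[String]): Unit = {

        val sc = new SparkContext("local[*]", "PartitionDemo")

        // Distribute a local collection across 4 partitions
        val numbers = sc.parallelize(1 to 12, numSlices = 4)

        println(numbers.getNumPartitions) // prints 4

        // Show which partition holds which elements
        numbers.mapPartitionsWithIndex((index, elements) => elements.map(n => s"partition $index -> $n"))
            .collect()
            .foreach(println)
    }
}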

RDD is Resilient & Immutable

Another property of RDD is its ability to recover quickly (Resilient), which comes with an additional requirement: immutability. When we write a Spark program, we are building a directed acyclic graph in which every transformation operation produces a new RDD.
For example, consider the Spark application we developed in Spark 01: Movie Rating Counter:
package com.javahelps.spark

import org.apache.spark.SparkContext

object MovieRatingsCounter {

    def main(args: Array[String]): Unit = {

        val sc = new SparkContext("local[*]", "MovieRatingsCounter")

        // Read a text file
        val data = sc.textFile("/tmp/ml-latest-small/ratings.csv")

        // Extract the first row which is the header
        val header = data.first()

        // Filter out the header from the dataset
        val filteredData = data.filter(row => row != header)

        // Extract rating from line as float
        val ratingData = filteredData.map(line => line.split(',')(2).toFloat)

        val result = ratingData.countByValue() // Count the number of occurrences of each rating

        println(result)
    }
}
In this code, the sc.textFile function creates the first RDD (referred to by the variable data) of String. The next Transformation operation, filter, creates a new RDD (referred to by the variable filteredData). Similarly, map is another Transformation operation which creates yet another RDD. Finally, countByValue is an Action operation which counts the number of occurrences of each rating and returns the result as a plain Scala object. The directed acyclic graph of RDDs created in the above code is shown in the following diagram.

[Figure: Spark RDD pipeline]

The important point to notice here is that none of these Spark operations modifies an existing RDD. Instead, each creates a new RDD. This behavior is known as the immutability of RDDs. Immutable RDDs allow Spark to rebuild an RDD from its predecessor in the pipeline if there is a failure. For example, in the above pipeline, if the filteredData RDD is lost for some reason, it can be rebuilt by applying the same filter operation to the previous RDD: data. If RDDs were mutable, there would be no guarantee that the data RDD is still in the same form it was in when Spark first called the filter operation on it. By being immutable and resilient, RDDs handle the failure of nodes in a distributed environment.
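
You can inspect this recovery information yourself: every RDD carries its lineage, and the toDebugString method prints the chain of parent RDDs that Spark would replay after a failure. A minimal sketch, reusing the sc and the ratings pipeline from the example above:
        // A minimal sketch reusing the pipeline above; assumes the same
        // /tmp/ml-latest-small/ratings.csv input as the earlier example
        val data = sc.textFile("/tmp/ml-latest-small/ratings.csv")
        val header = data.first()
        val filteredData = data.filter(row => row != header)
        val ratingData = filteredData.map(line => line.split(',')(2).toFloat)

        // Prints the lineage: ratingData <- filteredData <- data <- the text file
        println(ratingData.toDebugString)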

RDD is Compile-time Type-safe

RDDs are type-safe, similar to arrays in Java. In the example from the last section, the data and filteredData RDDs are of type RDD[String], so you can perform only String related operations on their elements. Similarly, ratingData is an RDD[Float], so you can perform only Float related operations on it. This helps developers avoid runtime errors, which are hard to track down in a distributed runtime.
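
For example, the compiler rejects a String operation on ratingData because its static type is RDD[Float], so the mistake never reaches the cluster. A small sketch based on the pipeline above:
        // ratingData has the static type RDD[Float]
        // ratingData.map(rating => rating.toUpperCase) // does not compile: toUpperCase is not a member of Float

        // Float operations compile and run fine
        val doubled = ratingData.map(rating => rating * 2)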


RDD is Lazy

RDDs support two types of operations: Transformation operations and Action operations. Transformation operations create a new RDD from an existing one without modifying it (because RDDs are immutable). The following table lists some of the common transformations supported by Spark.
Transformation: Meaning

map(func): Return a new distributed dataset formed by passing each element of the source through a function func.
filter(func): Return a new dataset formed by selecting those elements of the source on which func returns true.
flatMap(func): Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).
mapPartitions(func): Similar to map, but runs separately on each partition (block) of the RDD, so func must be of type Iterator<T> => Iterator<U> when running on an RDD of type T.
mapPartitionsWithIndex(func): Similar to mapPartitions, but also provides func with an integer value representing the index of the partition, so func must be of type (Int, Iterator<T>) => Iterator<U> when running on an RDD of type T.
sample(withReplacement, fraction, seed): Sample a fraction fraction of the data, with or without replacement, using a given random number generator seed.
union(otherDataset): Return a new dataset that contains the union of the elements in the source dataset and the argument.
intersection(otherDataset): Return a new RDD that contains the intersection of elements in the source dataset and the argument.
distinct([numPartitions]): Return a new dataset that contains the distinct elements of the source dataset.
groupByKey([numPartitions]): When called on a dataset of (K, V) pairs, returns a dataset of (K, Iterable<V>) pairs. Note: if you are grouping in order to perform an aggregation (such as a sum or average) over each key, using reduceByKey or aggregateByKey will yield much better performance. Note: by default, the level of parallelism in the output depends on the number of partitions of the parent RDD; you can pass an optional numPartitions argument to set a different number of tasks.
reduceByKey(func, [numPartitions]): When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func, which must be of type (V, V) => V. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.
aggregateByKey(zeroValue)(seqOp, combOp, [numPartitions]): When called on a dataset of (K, V) pairs, returns a dataset of (K, U) pairs where the values for each key are aggregated using the given combine functions and a neutral "zero" value. Allows an aggregated value type that is different from the input value type, while avoiding unnecessary allocations. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.
sortByKey([ascending], [numPartitions]): When called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean ascending argument.
join(otherDataset, [numPartitions]): When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are supported through leftOuterJoin, rightOuterJoin, and fullOuterJoin.
cogroup(otherDataset, [numPartitions]): When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (Iterable<V>, Iterable<W>)) tuples. This operation is also called groupWith.
cartesian(otherDataset): When called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements).
pipe(command, [envVars]): Pipe each partition of the RDD through a shell command, e.g. a Perl or bash script. RDD elements are written to the process's stdin, and lines output to its stdout are returned as an RDD of strings.
coalesce(numPartitions): Decrease the number of partitions in the RDD to numPartitions. Useful for running operations more efficiently after filtering down a large dataset.
repartition(numPartitions): Reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them. This always shuffles all data over the network.
repartitionAndSortWithinPartitions(partitioner): Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.
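
To make a few of these transformations concrete, here is a small word-count style sketch (assuming sc is an existing SparkContext; the input lines are made up for illustration) chaining flatMap, map, reduceByKey, and sortByKey:
        // A small sketch chaining a few of the transformations listed above
        val lines = sc.parallelize(Seq("to be or not to be", "to see or not to see"))

        val wordCounts = lines
            .flatMap(line => line.split(' ')) // one element per word
            .map(word => (word, 1))           // (K, V) pairs
            .reduceByKey((a, b) => a + b)     // aggregate the counts per key
            .sortByKey()                      // sort the pairs by word

        // Nothing has executed yet: wordCounts is just a recipe until an action is called
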
Action operations are used at the end of a Spark pipeline to get a Scala (or whatever language you use) object from the final RDD. A Spark program never executes until an Action operation is called on the last RDD. This behavior is known as lazy execution and is commonly observed in other pipeline architectures, including the Java Stream API. By being lazy, RDDs avoid wasting computing power on transformations whose results are never used. The following table lists some of the common actions supported by Spark.
Action: Meaning

reduce(func): Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel.
collect(): Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.
count(): Return the number of elements in the dataset.
first(): Return the first element of the dataset (similar to take(1)).
take(n): Return an array with the first n elements of the dataset.
takeSample(withReplacement, num, [seed]): Return an array with a random sample of num elements of the dataset, with or without replacement, optionally pre-specifying a random number generator seed.
takeOrdered(n, [ordering]): Return the first n elements of the RDD using either their natural order or a custom comparator.
saveAsTextFile(path): Write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system. Spark will call toString on each element to convert it to a line of text in the file.
saveAsSequenceFile(path): Write the elements of the dataset as a Hadoop SequenceFile in a given path in the local filesystem, HDFS or any other Hadoop-supported file system. This is available on RDDs of key-value pairs that implement Hadoop's Writable interface. In Scala, it is also available on types that are implicitly convertible to Writable (Spark includes conversions for basic types like Int, Double, String, etc).
saveAsObjectFile(path): Write the elements of the dataset in a simple format using Java serialization, which can then be loaded using SparkContext.objectFile().
countByKey(): Only available on RDDs of type (K, V). Returns a hashmap of (K, Int) pairs with the count of each key.
foreach(func): Run a function func on each element of the dataset. This is usually done for side effects such as updating an Accumulator or interacting with external storage systems. Note: modifying variables other than Accumulators outside of foreach() may result in undefined behavior.
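
Continuing the word-count sketch from the previous section, a few of these actions look like this:
        // Actions materialize results on the driver (continuing the sketch above)
        println(wordCounts.count())           // number of distinct words
        wordCounts.take(3).foreach(println)   // the first three (word, count) pairs
        wordCounts.collect().foreach(println) // all (word, count) pairs at the driver
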
As a Spark developer, always remember that until you call an Action operation on an RDD, your Spark cluster will not do anything for you. For example, if you print ratingData as shown below instead of calling the countByValue action method, the output is only a short textual description of the RDD object itself, not the list of ratings.
package com.javahelps.spark

import org.apache.spark.SparkContext

object MovieRatingsCounter {

    def main(args: Array[String]): Unit = {

        val sc = new SparkContext("local[*]", "MovieRatingsCounter")

        // Read a text file
        val data = sc.textFile("/tmp/ml-latest-small/ratings.csv")

        // Extract the first row which is the header
        val header = data.first()

        // Filter out the header from the dataset
        val filteredData = data.filter(row => row != header)

        // Extract rating from line as float
        val ratingData = filteredData.map(line => line.split(',')(2).toFloat)

        println(ratingData)
    }
}
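
To actually see some ratings, replace the println with an Action operation, for example take (a small sketch):
        // Replacing the println above with an action triggers the computation
        ratingData.take(10).foreach(println) // prints the first 10 ratings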

In a nutshell, a Spark developer must know the following facts about RDDs:
  • You cannot modify an RDD, because RDDs are immutable
  • RDDs are lazy, so you need to call an action method at the end of a pipeline to get the output
  • Though an RDD may look like a local variable in your code, at runtime it can be spread across a cluster of nodes
If you would like to learn more about RDDs, please check the research paper titled Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing.
Feel free to comment below if you have any questions.