Spark Interview Questions and Answers
Q.What is the difference between Spark and Hadoop?
1. Speed
Spark is a lightning-fast cluster computing tool. Apache Spark runs applications up to 100x faster in memory and 10x faster on disk than Hadoop MapReduce. Spark makes this possible by reducing the number of read/write cycles to disk and storing intermediate data in memory.
– MapReduce reads from and writes to disk, which slows down the processing speed.
2. Ease of Programming
– Spark is easy to program, as it provides many high-level operators on RDDs (Resilient Distributed Datasets).
– In MapReduce, developers need to hand-code each and every operation, which makes it difficult to work with.
3. Easy to Manage
– Spark can perform batch processing, interactive queries, Machine Learning, and streaming, all on the same cluster, which makes it a complete data analytics engine. There is no need to manage a different component for each need; installing Spark on a cluster is enough to handle all these requirements.
– MapReduce provides only a batch engine, so we depend on different engines, for example Storm, Giraph, or Impala, for other requirements. Managing so many components is very difficult.
4. Real-Time Processing
– Spark can process real-time data, i.e. data coming from real-time event streams at the rate of millions of events per second, e.g. Twitter data or Facebook sharing/posting. Spark’s strength is the ability to process live streams efficiently.
– MapReduce fails when it comes to real-time data processing, as it was designed to perform batch processing on voluminous amounts of data.
5. Fault Tolerance
– Spark is fault-tolerant, so there is no need to restart the application from scratch in case of any failure.
– Like Apache Spark, MapReduce is also fault-tolerant, so there is no need to restart the application from scratch in case of any failure.
6. Security
– Spark is a little less secure than MapReduce because it supports only shared-secret (password) authentication.
– Apache Hadoop MapReduce is more secure because of Kerberos, and it also supports Access Control Lists (ACLs), a traditional file permission model.
Q.Why is Parquet used for Spark SQL?
Parquet is a columnar format supported by many data processing systems. Say you have a table with 100 columns, and most of the time you access only 3-10 of them. In a row-oriented format, all columns are scanned whether you need them or not. The advantages of columnar storage are as follows:
- Columnar storage limits IO operations.
- Columnar storage can fetch specific columns that you need to access.
- Columnar storage consumes less space.
- Columnar storage gives better-summarized data and follows type-specific encoding.
Spark SQL supports both reading and writing Parquet files, automatically capturing the schema of the original data. Parquet files follow the same loading procedure as JSON datasets.
Apache Parquet stores data in a column-oriented fashion, so if you need 3 columns, only the data of those 3 columns gets loaded. Another benefit is that, since all data in a given column is of the same datatype, compression quality is far superior.
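The column-pruning benefit can be sketched in plain Python, with no Parquet library involved (the table shape and column names below are made up purely for illustration):

```python
# Simulate a 100-column table stored row-oriented vs column-oriented.
NUM_COLS = 100
NUM_ROWS = 1000
col_names = [f"c{i}" for i in range(NUM_COLS)]

# Row-oriented: a list of row dicts; column-oriented: a dict of column lists.
rows = [{name: r for name in col_names} for r in range(NUM_ROWS)]
columns = {name: [r for r in range(NUM_ROWS)] for name in col_names}

# Reading 3 columns from row storage still touches every cell of every row.
row_cells_read = sum(len(row) for row in rows)

# Reading 3 columns from columnar storage touches only those 3 columns.
wanted = ["c0", "c1", "c2"]
col_cells_read = sum(len(columns[name]) for name in wanted)

print(row_cells_read)   # 100000 cells scanned
print(col_cells_read)   # 3000 cells scanned
```

The row-oriented scan reads 100,000 cells to answer a 3-column query; the columnar scan reads 3,000.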
Q.Why does Spark exist when Hadoop already does?
Below are a few reasons.
Iterative algorithms: MapReduce is generally not good at processing iterative algorithms like Machine Learning and graph processing. Graph and Machine Learning algorithms are iterative by nature and need their data in memory to run the algorithm steps again and again; fewer saves to disk and fewer transfers over the network mean better performance.
In-memory processing:
MapReduce uses disk storage for storing processed intermediate data and also reads it back from disk, which is not good for fast processing. Spark keeps data in memory (configurable), which saves a lot of time by not reading and writing data to disk as happens in the case of Hadoop.
Near real-time data processing:
Spark also supports near real-time streaming workloads via Spark Streaming application framework.
Q.Why are both Spark and Hadoop needed?
Spark is often called a cluster computing engine or simply an execution engine. Spark uses many concepts from Hadoop MapReduce, and the two work well together. Spark with HDFS and YARN gives better performance and also simplifies work distribution on the cluster: HDFS is the storage engine for storing huge volumes of data, and Spark is the processing engine (in-memory, as well as more efficient data processing).
HDFS: It is used as a Storage engine for Spark as well as Hadoop.
YARN: It is a framework to manage the cluster using pluggable schedulers.
Run more than MapReduce: With Spark you can run MapReduce-style algorithms as well as other higher-level operators, for instance map(), filter(), reduceByKey(), groupByKey(), etc.
Q.How can you use a Machine Learning library written in Python, such as the SciKit library, with the Spark engine?
A machine learning tool written in Python, e.g. the SciKit library, can be used within a Pipeline API in Spark MLlib or by calling pipe().
Q.Why is Spark good at low-latency iterative workloads, e.g. graphs and Machine Learning?
Machine Learning algorithms, for instance logistic regression, require many iterations before producing an optimal model. Similarly, graph algorithms traverse all the nodes and edges. Any algorithm that needs many iterations before producing a result performs better when the intermediate partial results are stored in memory or on very fast solid-state drives.
Spark can cache/store intermediate data in memory for faster model building and training.
Also, graph algorithms traverse the graph one connection per iteration, keeping the partial result in memory. Less disk access and network traffic can make a huge difference when you need to process lots of data.
Q.What are the ways to configure Spark properties? Order them from the least important to the most important.
There are the following ways to set up properties for Spark and user programs (in order of importance, from the least important to the most important):
- conf/spark-defaults.conf - the default
- --conf - the command line option used by spark-shell and spark-submit
- SparkConf - set programmatically in the application (the most important)
Q.What is the default level of parallelism in Spark?
The default level of parallelism is the number of partitions used when it is not specified explicitly by the user.
Q.What are the differences between functional and imperative languages, and why is functional programming important?
The following features of Scala (a functional language) make it uniquely suitable for Spark.
Immutability -
Immutable means that you can't change your variables; you mark them as final in Java, or use the val keyword in Scala.
Higher order functions -
These are functions that take other functions as parameters, or whose result is a function. Here is a function apply which takes another function f and a value v and applies function f to v: example -
def apply(f: Int => String, v: Int) = f(v)
Lazy loading -
A lazy val is evaluated only when it is accessed for the first time; otherwise it is never evaluated.
Pattern matching - Scala has a built-in general pattern matching mechanism. It allows matching on any sort of data with a first-match policy.
Currying -
If we turn a function with multiple parameter lists into a function object that we can assign or pass around, its signature looks like this: val sizeConstraintFn: IntPairPred => Int => Email => Boolean = sizeConstraint _. Such a chain of one-parameter functions is called a curried function.
Partial application -
When applying the function, you do not pass in arguments for all of the parameters defined by the function, but only for some of them, leaving the remaining ones blank. What you get back is a new function whose parameter list only contains those parameters from the original function that were left blank.
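Partial application can be illustrated with Python's functools.partial (the size_constraint function below is a made-up example in the spirit of the Scala snippet above, not Spark code):

```python
from functools import partial

def size_constraint(pred, n, email_size):
    # pred compares the email size against the threshold n.
    return pred(email_size, n)

# Fix pred and n, leaving email_size blank: we get back a new
# one-parameter function containing only the parameter left blank.
max_20 = partial(size_constraint, lambda size, n: size <= n, 20)

print(max_20(15))  # True  - 15 <= 20
print(max_20(25))  # False - 25 > 20
```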
Monadic collections -
Most Scala collections are monadic; operating on them using map and flatMap operations, or using for-comprehensions, is referred to as monadic style.
Q.Is it possible to have multiple SparkContexts in a single JVM?
Yes, if spark.driver.allowMultipleContexts is true (default: false). If true, Spark logs warnings instead of throwing exceptions when multiple SparkContexts are active, i.e. running in the same JVM. The check happens when an instance of SparkContext is created.
Q.Can RDD be shared between SparkContexts?
No. When an RDD is created, it belongs to and is completely owned by the Spark context it originated from. RDDs can’t be shared between SparkContexts.
Q.In Spark-Shell, which all contexts are available by default?
SparkContext and SQLContext
Q.Give a few examples of how an RDD can be created using SparkContext
SparkContext allows you to create many different RDDs from input sources like:
- Scala’s collections: e.g. sc.parallelize(0 to 100)
- Local or remote filesystems: sc.textFile("README.md")
- Any Hadoop InputSource: using sc.newAPIHadoopFile
Q.How would you broadcast a collection of values over the Spark executors?
Use sc.broadcast(collection); the returned Broadcast value can then be read inside tasks via its value method.
Q.What is the advantage of broadcasting values across Spark Cluster?
Spark transfers the value to Spark executors once, and tasks can share it without incurring repetitive network transmissions when requested multiple times.
Q.Can we broadcast an RDD?
Yes, it is possible, but you should not broadcast an RDD to use in tasks, and Spark will warn you if you do. It will not stop you, though.
Q.How can we distribute JARs to workers?
The jar you specify with SparkContext.addJar will be copied to all the worker nodes.
Q.How can you stop SparkContext and what is the impact if stopped?
You can stop a Spark context using SparkContext.stop() method. Stopping a Spark context stops the Spark Runtime Environment and effectively shuts down the entire Spark application.
Q.Which scheduler is used by SparkContext by default?
By default, SparkContext uses DAGScheduler , but you can develop your own custom DAGScheduler implementation.
Q.How would you set the amount of memory to allocate to each executor?
SPARK_EXECUTOR_MEMORY sets the amount of memory to allocate to each executor.
Q.How do you define RDD?
A Resilient Distributed Dataset (RDD) is the basic abstraction in Spark. It represents an immutable, partitioned collection of elements that can be operated on in parallel. RDDs are a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. An RDD is:
- Resilient, i.e. fault-tolerant and able to recompute missing or damaged partitions on node failures with the help of the RDD lineage graph.
- Distributed, i.e. a collection of partitioned data spread across nodes.
Q.What does "lazy evaluated RDD" mean?
Lazy evaluated means that the data inside the RDD is not available or transformed until an action is executed that triggers the execution.
Q.How would you control the number of partitions of an RDD?
You can control the number of partitions of an RDD using the repartition or coalesce operations.
Q.What are the possible operations on RDDs?
RDDs support two kinds of operations:
transformations - lazy operations that return another RDD.
actions - operations that trigger computation and return values.
Q.How does an RDD help parallel job processing?
Spark does jobs in parallel, and RDDs are split into partitions to be processed and written in parallel. Inside a partition, data is processed sequentially.
Q.What is a transformation?
A transformation is a lazy operation on an RDD that returns another RDD, like map, flatMap, filter, reduceByKey, join, cogroup, etc. Transformations are lazy and are not executed immediately, but only after an action has been executed.
Q.How do you define actions?
An action is an operation that triggers execution of RDD transformations and returns a value (to the Spark driver, i.e. the user program). Simply put, an action evaluates the RDD lineage graph.
You can think of actions as a valve: until an action is fired, the data to be processed is not even in the pipes, i.e. the transformations. Only actions can materialize the entire processing pipeline with real data.
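The valve analogy can be mimicked with a Python generator — this is only an analogy for laziness, not Spark's implementation. The generator expression plays the role of a transformation (nothing runs when it is defined), and sum() plays the role of an action that pulls data through the pipeline:

```python
log = []

def trace_square(x):
    log.append(x)          # record that work actually happened
    return x * x

data = range(1, 4)
pipeline = (trace_square(x) for x in data)   # "transformation": nothing runs yet

assert log == []                             # no work done so far (lazy)

result = sum(pipeline)                       # "action": triggers the computation
assert result == 1 + 4 + 9                   # 14
assert log == [1, 2, 3]                      # work happened only at the action
```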
Q.How can you create an RDD for a text file?
Use SparkContext.textFile, e.g. sc.textFile("README.md"), which creates an RDD with one element per line of the file.
Q.What are preferred locations?
A preferred location (aka locality preferences or placement preferences) is a block location for an HDFS file, indicating where each partition should preferably be computed.
def getPreferredLocations(split: Partition): Seq[String] specifies the placement preferences for a partition in an RDD.
Q.What is an RDD lineage graph?
An RDD lineage graph (aka RDD operator graph) is a graph of the parent RDDs of an RDD. It is built as a result of applying transformations to the RDD. An RDD lineage graph is hence a graph of the transformations that need to be executed after an action has been called.
Q.How does execution start and end on an RDD in a Spark job?
Execution Plan starts with the earliest RDDs (those with no dependencies on other RDDs or reference cached data) and ends with the RDD that produces the result of the action that has been called to execute.
Q.Give examples of transformations that do trigger jobs
There are a couple of transformations that do trigger jobs, e.g. sortBy , zipWithIndex , etc.
Q.How many types of transformations exist?
There are two kinds of transformations:
- narrow transformations
- wide transformations
Q.What is a narrow transformation?
Narrow transformations are the result of map, filter and similar operations, where the output is computed from the data of a single partition only, i.e. it is self-sustained.
An output RDD has partitions with records that originate from a single partition in the parent RDD. Only a limited subset of partitions is used to calculate the result. Spark groups narrow transformations into a stage.
Q.What is a wide transformation?
Wide transformations are the result of groupByKey and reduceByKey. The data required to compute the records in a single partition may reside in many partitions of the parent RDD.
All of the tuples with the same key must end up in the same partition, processed by the same task. To satisfy these operations, Spark must execute an RDD shuffle, which transfers data across the cluster and results in a new stage with a new set of partitions.
Q.Data is spread across all the nodes of the cluster; how does Spark try to process this data?
By default, Spark tries to read data into an RDD from the nodes that are close to it. Since Spark usually accesses distributed partitioned data, it creates partitions to hold the data chunks in order to optimize transformation operations.
Q.How would you hint at the minimum number of partitions during a transformation?
You can request for the minimum number of partitions, using the second input parameter to many transformations.
scala> sc.parallelize(1 to 100, 2).count
The preferred way to set the number of partitions for an RDD is to pass it directly as the second input parameter in the call, like rdd = sc.textFile("hdfs://… /file.txt", 400), where 400 is the number of partitions. In this case, the partitioning into 400 splits is done by Hadoop’s TextInputFormat, not by Spark, and it works much faster. The code also spawns 400 concurrent tasks to try to load file.txt directly into 400 partitions.
Q.How many concurrent tasks can Spark run for an RDD partition?
Spark can run only 1 concurrent task for every partition of an RDD, up to the number of cores in your cluster. So if you have a cluster with 50 cores, you want your RDDs to have at least 50 partitions (and probably 2-3x that).
As far as choosing a "good" number of partitions, you generally want at least as many as the number of executors for parallelism. You can get this computed value by calling sc.defaultParallelism .
Q.What limits the maximum size of a partition?
The maximum size of a partition is ultimately limited by the available memory of an executor.
Q.When Spark works with file.txt.gz, how many partitions can be created?
When using textFile with compressed files ( file.txt.gz not file.txt or similar), Spark disables splitting that makes for an RDD with only 1 partition (as reads against gzipped files cannot be parallelized). In this case, to change the number of partitions you should do repartitioning.
Please note that Spark disables splitting for compressed files and creates RDDs with only 1 partition. In such cases, it’s helpful to use sc.textFile('demo.gz') and do repartitioning using rdd.repartition(100) as follows:
rdd = sc.textFile('demo.gz')
rdd = rdd.repartition(100)
With these lines, you end up with rdd having exactly 100 partitions of roughly equal size.
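What repartition(100) achieves conceptually — redistributing the records of one big partition into a fixed number of roughly equal buckets — can be sketched in pure Python (a round-robin simulation for illustration, not Spark's actual shuffle machinery):

```python
def simulate_repartition(records, num_partitions):
    # Deal records out round-robin so partitions end up roughly equal in size.
    partitions = [[] for _ in range(num_partitions)]
    for i, rec in enumerate(records):
        partitions[i % num_partitions].append(rec)
    return partitions

# One big "partition" (like a non-splittable gzip file) spread into 100.
records = list(range(1000))
parts = simulate_repartition(records, 100)

print(len(parts))               # 100 partitions
print({len(p) for p in parts})  # {10} - all partitions equal in size here
```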
Q.What is coalesce transformation?
The coalesce transformation is used to change the number of partitions. It can trigger RDD shuffling depending on the second shuffle boolean input parameter (defaults to false ).
Q.What is the difference between the cache() and persist() methods of RDD?
RDDs can be cached (using RDD’s cache() operation) or persisted (using RDD’s persist(newLevel: StorageLevel) operation). The cache() operation is a synonym of persist() that uses the default storage level MEMORY_ONLY .
Q.You have an RDD storage level defined as MEMORY_ONLY_2; what does _2 mean?
The number _2 in the name denotes 2 replicas.
Q.What is Shuffling?
Shuffling is a process of repartitioning (redistributing) data across partitions and may cause moving it across JVMs or even network when it is redistributed among executors.
Avoid shuffling at all cost. Think about ways to leverage existing partitions. Leverage partial aggregation to reduce data transfer.
Q.Does shuffling change the number of partitions?
No. By default, shuffling doesn’t change the number of partitions, only their content.
Q.What is the difference between groupByKey and reduceByKey?
Avoid groupByKey and use reduceByKey or combineByKey instead. groupByKey shuffles all the data, which is slow.
reduceByKey shuffles only the results of sub-aggregations in each partition of the data.
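The difference can be simulated in plain Python by counting how many records would cross the network: groupByKey ships every record, while reduceByKey first combines within each partition (a conceptual sketch with made-up data, not Spark's shuffle implementation):

```python
from collections import defaultdict

partitions = [
    [("a", 1), ("b", 1), ("a", 1)],   # partition 0
    [("a", 1), ("b", 1), ("b", 1)],   # partition 1
]

# groupByKey: every (key, value) record is shuffled as-is.
group_by_key_shuffled = sum(len(p) for p in partitions)

# reduceByKey: pre-aggregate per partition (map-side combine),
# then shuffle only one record per key per partition.
reduce_by_key_shuffled = 0
for part in partitions:
    local = defaultdict(int)
    for k, v in part:
        local[k] += v
    reduce_by_key_shuffled += len(local)

print(group_by_key_shuffled)    # 6 records shuffled
print(reduce_by_key_shuffled)   # 4 records shuffled
```

With skewed keys and large partitions, the gap between the two counts grows much larger, which is why reduceByKey is preferred.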
Q.When you call join operation on two pair RDDs e.g. (K, V) and (K, W), what is the result?
When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key
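The (K, V) x (K, W) -> (K, (V, W)) behavior can be reproduced on plain Python lists (a local sketch with made-up data; Spark performs the same pairing in a distributed way):

```python
from collections import defaultdict

def pair_join(left, right):
    # Index the right side by key, then emit one pair per key match.
    right_index = defaultdict(list)
    for k, w in right:
        right_index[k].append(w)
    return [(k, (v, w)) for k, v in left for w in right_index[k]]

kv = [("a", 1), ("b", 2), ("a", 3)]
kw = [("a", "x"), ("c", "y")]

print(pair_join(kv, kw))
# [('a', (1, 'x')), ('a', (3, 'x'))] - only keys present on both sides survive
```

Note that, like Spark's join, this is an inner join: keys "b" and "c" appear on only one side, so they produce no output.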
Q.What is checkpointing?
Checkpointing is a process of truncating the RDD lineage graph and saving the RDD to a reliable distributed (HDFS) or local file system. RDD checkpointing saves the actual intermediate RDD data to a reliable distributed file system.
You mark an RDD for checkpointing by calling RDD.checkpoint() . The RDD will be saved to a file inside the checkpoint directory and all references to its parent RDDs will be removed. This function has to be called before any job has been executed on this RDD.
Q.What do you mean by Dependencies in RDD lineage graph?
Dependency is a connection between RDDs after applying a transformation.
Q.Which script would you use to launch a Spark application?
You use spark-submit script to launch a Spark application, i.e. submit the application to a Spark deployment environment.
Q.Define Spark architecture
Spark uses a master/worker architecture. There is a driver that talks to a single coordinator called master that manages workers in which executors run. The driver and the executors run in their own Java processes.
Q.What is the purpose of Driver in Spark Architecture?
A Spark driver is the process that creates and owns an instance of SparkContext. It is your Spark application that launches the main method in which the instance of SparkContext is created.
- The driver splits a Spark application into tasks and schedules them to run on executors.
- A driver is where the task scheduler lives and spawns tasks across workers.
- A driver coordinates workers and the overall execution of tasks.
Q.Can you define the purpose of the master in the Spark architecture?
A master is a running Spark instance that connects to a cluster manager for resources. The master acquires cluster nodes to run executors.
Q.What are the workers?
Workers or slaves are running Spark instances where executors live to execute tasks. They are the compute nodes in Spark. A worker receives serialized/marshalled tasks that it runs in a thread pool.
Q.Please explain how workers work when a new job is submitted to them
When a SparkContext is created, each worker starts one executor. This is a separate Java process (a new JVM), and it loads the application jar into this JVM. The executors then connect back to your driver program, and the driver sends them commands, like foreach, filter, map, etc. As soon as the driver quits, the executors shut down.
Q.Please define executors in detail?
Executors are distributed agents responsible for executing tasks. Executors provide in-memory storage for RDDs that are cached in Spark applications. When executors are started, they register themselves with the driver and communicate directly to execute tasks.
Q.What is the DAGScheduler and how does it perform?
DAGScheduler is the scheduling layer of Apache Spark that implements stage-oriented scheduling, i.e. after an RDD action has been called it becomes a job that is then transformed into a set of stages that are submitted as TaskSets for execution.
DAGScheduler uses an event queue architecture in which a thread can post DAGSchedulerEvent events, e.g. a new job or stage being submitted, that DAGScheduler reads and executes sequentially.
Q.What is stage, with regards to Spark Job execution?
A stage is a set of parallel tasks, one per partition of an RDD, that compute partial results of a function executed as part of a Spark job.
Q.What is Task, with regards to Spark Job execution?
A task is an individual unit of work for executors to run. It is an individual unit of physical execution (computation) that runs on a single machine on a partition of your Spark application's data. All tasks in a stage must be completed before moving on to the next stage.
- A task can also be considered a computation in a stage on a partition in a given job attempt.
- A task belongs to a single stage and operates on a single partition (a part of an RDD).
- Tasks are spawned one by one for each stage and data partition.
Q.What is speculative execution of tasks?
Speculative tasks, or task stragglers, are tasks that run slower than most of the other tasks in a job.
Speculative execution of tasks is a health-check procedure that checks for tasks to be speculated, i.e. running slower in a stage than the median of all successfully completed tasks in a taskset . Such slow tasks will be re-launched in another worker. It will not stop the slow tasks, but run a new copy in parallel.
Q.Which cluster managers can be used with Spark?
Apache Mesos, Hadoop YARN, Spark Standalone, and
Spark local: a local node or a single JVM, where the driver and executors run in the same JVM. In this case, the same node is used for execution.
Q.What is a BlockManager?
Block Manager is a key-value store for blocks that acts as a cache. It runs on every node, i.e. the driver and the executors, in a Spark runtime environment. It provides interfaces for putting and retrieving blocks, both locally and remotely, into various stores, i.e. memory, disk, and off-heap.
A BlockManager manages the storage for most of the data in Spark, i.e. blocks that represent a cached RDD partition, intermediate shuffle data, and broadcast data.
Q.What is Data locality / placement?
Spark relies on data locality or data placement or proximity to data source, that makes Spark jobs sensitive to where the data is located. It is therefore important to have Spark running on Hadoop YARN cluster if the data comes from HDFS.
With HDFS, the Spark driver contacts the NameNode about the DataNodes (ideally local) containing the various blocks of a file or directory, as well as their locations (represented as InputSplits), and then schedules the work to the Spark workers. Spark’s compute nodes / workers should run on the storage nodes.
Q.What is the master URL in local mode?
You can run Spark in local mode using local, local[n], or the most general local[*]. The URL says how many threads can be used in total:
- local uses 1 thread only.
- local[n] uses n threads.
- local[*] uses as many threads as the number of processors available to the Java virtual machine (it uses Runtime.getRuntime.availableProcessors() to know the number).
Q.Define the components of YARN
YARN components are below
ResourceManager: runs as a master daemon and manages ApplicationMasters and NodeManagers.
ApplicationMaster: a lightweight process that coordinates the execution of the tasks of an application and asks the ResourceManager for resource containers for tasks. It monitors tasks, restarts failed ones, etc. It can run any type of task, be they MapReduce tasks, Giraph tasks, or Spark tasks.
NodeManager: offers resources (memory and CPU) as resource containers and can run tasks, including ApplicationMasters.
Q.What is a Broadcast Variable?
Broadcast variables allow the programmer to keep a read-only variable cached on each machine rather than shipping a copy of it with tasks.
Q.How can you define Spark Accumulators?
These are similar to counters in the Hadoop MapReduce framework; they give information regarding the completion of tasks, how much data has been processed, etc.
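The write-only-on-workers / read-on-driver behavior of an accumulator can be sketched with a tiny Python class (an analogy only; Spark's real Accumulator is distributed, with per-task copies merged back on the driver):

```python
class Accumulator:
    """Tasks may only add(); only the 'driver' reads value."""
    def __init__(self, initial=0):
        self._value = initial

    def add(self, amount):
        self._value += amount

    @property
    def value(self):
        return self._value

records_processed = Accumulator()

# Each "task" processes one partition and bumps the counter per record.
for partition in [[1, 2, 3], [4, 5], [6]]:
    for _ in partition:
        records_processed.add(1)

print(records_processed.value)   # 6 records processed in total
```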
Q.What data sources can Spark process?
- Hadoop File System (HDFS)
- Cassandra (NoSQL database)
- HBase (NoSQL database)
- S3 (Amazon Web Services storage: AWS Cloud)
Q.What is the Apache Parquet format?
Apache Parquet is a columnar storage format.
Q.What is Apache Spark Streaming?
Spark Streaming helps to process live stream data. Data can be ingested from many sources like Kafka, Flume, Twitter, ZeroMQ, Kinesis, or TCP sockets, and can be processed using complex algorithms expressed with high-level functions like map, reduce, join and window.