Fast Data Processing with Spark

By: Holden Karau

Overview of this book

Spark is a framework for writing fast, distributed programs. Spark solves similar problems to Hadoop MapReduce, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop and its built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.

Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book will guide you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.

Fast Data Processing with Spark covers everything from setting up your Spark cluster in a variety of situations (standalone, EC2, and so on) to using the interactive shell to write distributed code interactively. From there, we move on to cover how to write and deploy distributed jobs in Java, Scala, and Python.

We then examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. We also look at how to use Hive with Spark to get a SQL-like query syntax with Shark, as well as how to manipulate resilient distributed datasets (RDDs).

Testing in Java and Scala


For the sake of simplicity, this chapter will look at using ScalaTest and JUnit as the testing libraries. ScalaTest can be used to test both Scala and Java code and is the testing library currently used in Spark. JUnit is a popular testing framework for Java.
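
To give a sense of what these tests look like, here is a minimal ScalaTest suite, written as a sketch in the FunSuite style (the class and test names are purely illustrative and not taken from the book):

  import org.scalatest.FunSuite

  // A minimal ScalaTest suite: each test is a named block of assertions.
  class SimpleSuite extends FunSuite {
    test("addition behaves as expected") {
      assert(2 + 2 == 4)
    }
  }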

Refactoring your code for testability

If you have code that can be isolated from the RDD or SparkContext interaction, it can be tested using standard methodologies. While it can be quite convenient to use anonymous functions when writing Spark code, giving them names lets you test them more easily, without the expensive overhead of setting up a SparkContext. For example, in your Scala CSV parser, you could have this hard-to-test code:

  // The parsing logic is buried in an anonymous function, so it cannot be
  // exercised without building an RDD first.
  val splitLines = inFile.map(line => {
    val reader = new CSVReader(new StringReader(line))
    reader.readNext().map(_.toDouble)
  })

Or in Java you had:

JavaRDD<Integer[]> splitLines = inFile.flatMap(new FlatMapFunction<String...
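
Spelled out in full, a hard-to-test Java version along these lines might look roughly like the following sketch. It assumes opencsv's CSVReader (as in the Scala example) and the older Spark Java API, in which FlatMapFunction.call returns an Iterable; it is an illustration rather than the book's exact listing:

  // inFile is the JavaRDD<String> of raw input lines.
  JavaRDD<Integer[]> splitLines = inFile.flatMap(
      new FlatMapFunction<String, Integer[]>() {
        public Iterable<Integer[]> call(String line) throws Exception {
          CSVReader reader = new CSVReader(new StringReader(line));
          String[] fields = reader.readNext();
          Integer[] parsed = new Integer[fields.length];
          for (int i = 0; i < fields.length; i++) {
            parsed[i] = Integer.parseInt(fields[i]);
          }
          // Each input line yields exactly one parsed record.
          return Collections.singletonList(parsed);
        }
      });

In both versions, the parsing logic is welded to the RDD transformation. By contrast, pulling it out into a named function makes it testable with plain ScalaTest and no SparkContext at all. The following is a minimal sketch of that refactoring; the object, class, and test names are illustrative, and the imports assume opencsv's au.com.bytecode.opencsv package:

  import java.io.StringReader
  import au.com.bytecode.opencsv.CSVReader
  import org.scalatest.FunSuite

  // The parsing logic, extracted into a named function with no dependency on
  // RDDs or SparkContext, so it can be unit tested directly.
  object CsvParsing {
    def parseLine(line: String): Array[Double] = {
      val reader = new CSVReader(new StringReader(line))
      reader.readNext().map(_.toDouble)
    }
  }

  // A plain ScalaTest suite exercising the extracted function.
  class CsvParsingSuite extends FunSuite {
    test("a CSV line is parsed into doubles") {
      assert(CsvParsing.parseLine("1.0,2.0,3.0").toList == List(1.0, 2.0, 3.0))
    }
  }

The Spark job itself then simply maps the named function over the RDD, for example with inFile.map(CsvParsing.parseLine).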