Fast Data Processing with Spark

By Holden Karau
Overview of this book

Spark is a framework for writing fast, distributed programs. Spark solves similar problems to Hadoop MapReduce, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.

Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.

Fast Data Processing with Spark covers everything from setting up your Spark cluster in a variety of situations (stand-alone, EC2, and so on) to using the interactive shell to write distributed code interactively. From there, we move on to how to write and deploy distributed jobs in Java, Scala, and Python.

We then examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. We also look at how to use Hive with Spark to get a SQL-like query syntax with Shark, as well as how to manipulate resilient distributed datasets (RDDs).

Building your Spark job with something else


If neither sbt nor Maven suits your needs, you may decide to use another build system. Thankfully, Spark supports building a fat JAR file that bundles all of Spark's dependencies, which makes it easy to include in the build system of your choice. Simply run sbt/sbt assembly in the Spark directory, copy the resulting assembly JAR from core/target/spark-core-assembly-0.7.0.jar into your build's dependencies, and you are good to go.
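
A minimal command-line sketch of those two steps, assuming a Spark 0.7.0 source checkout in ~/spark and a project that keeps third-party JARs in a lib/ directory (both paths are illustrative assumptions):

    # Build the fat assembly JAR from the Spark source tree (path assumed)
    cd ~/spark
    sbt/sbt assembly

    # Copy the assembly into your project's dependency directory (lib/ is an assumption)
    cp core/target/spark-core-assembly-0.7.0.jar ~/my-project/lib/

Once the assembly JAR is on your build's classpath, your build system needs no further knowledge of Spark's own dependencies.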

Tip

No matter what your build system is, you may find yourself wanting to use a patched version of the Spark libraries. In that case, you can deploy your Spark library locally. I recommend giving it a different version number to ensure that sbt/Maven picks up the modified version. You can change the version by editing project/SparkBuild.scala and changing the version := part of the code. If you are using sbt, you should run sbt/sbt update in the project that is importing the custom version. For other build systems, you...
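
As a sketch of what that edit looks like, here is the relevant fragment of project/SparkBuild.scala with the version string bumped; the surrounding settings and the "-CUSTOM" suffix are illustrative assumptions and vary by Spark release:

    // Excerpt (sketch) from project/SparkBuild.scala -- bump the version string
    // so that sbt/Maven resolves your patched build instead of the stock release.
    def sharedSettings = Defaults.defaultSettings ++ Seq(
      organization := "org.spark-project",
      version      := "0.7.0-CUSTOM",   // was: version := "0.7.0"
      scalaVersion := "2.9.2"
    )

Your own project's build definition then depends on spark-core with the matching custom version string, so that the locally published, patched artifact is the one that gets resolved.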