Spark configuration


There are several ways to configure your Spark jobs, and this section discusses them. More specifically, as of the Spark 2.x release, the system can be configured in three places:

  • Spark properties
  • Environment variables
  • Logging

Spark properties

As discussed previously, Spark properties control most of the application-specific parameters and can be set using a SparkConf object. Alternatively, these parameters can be set through Java system properties. SparkConf lets you configure some of the common properties, as follows:

setAppName()     // Set the application name
setMaster()      // Set the master URL
setSparkHome()   // Set the location where Spark is installed on worker nodes
setExecutorEnv() // Set one or more environment variables to be used when launching executors
setJars()        // Set the JAR files to distribute to the cluster
setAll()         // Set multiple parameters at once
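To see how these setters combine, here is a minimal, illustrative sketch; the application name, master URL, executor memory value, and JAR path are placeholders chosen for this example rather than recommended settings:

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder values for illustration only
val conf = new SparkConf()
  .setAppName("MySparkApp")               // name shown in the Spark UI and logs
  .setMaster("local[*]")                  // run locally on all available cores
  .set("spark.executor.memory", "2g")     // any property can also be set by key
  .setJars(Seq("/path/to/myApp.jar"))     // hypothetical JAR to ship to the cluster

val sc = new SparkContext(conf)

Note that once the SparkConf object is passed to Spark, it is cloned and can no longer be modified, so these properties should be set before the SparkContext is instantiated.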

An application can be configured to use a number of available cores on your machine...
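For instance, in local mode the degree of parallelism is encoded in the master URL itself; the thread counts below are illustrative:

// "local"    -- run Spark with a single worker thread
// "local[4]" -- run locally with four worker threads
// "local[*]" -- run locally with as many threads as there are logical cores
val conf = new SparkConf()
  .setAppName("CoreConfigDemo") // hypothetical application name
  .setMaster("local[4]")        // use four cores on this machine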