Big Data Analytics

By: Venkat Ankam

Overview of this book

Big Data Analytics provides the fundamentals of Apache Spark and Hadoop. All Spark components (Spark Core, Spark SQL, DataFrames, Datasets, Spark Streaming, Structured Streaming, MLlib, and GraphX) and the Hadoop core components (HDFS, MapReduce, and YARN) are explored in depth, with implementation examples on Spark + Hadoop clusters. The industry is moving away from MapReduce toward Spark, so the advantages of Spark over MapReduce are explained in depth to help you reap the benefits of in-memory processing speeds. The DataFrames API, the Data Sources API, and the new Dataset API are explained for building Big Data analytical applications. Real-time data analytics using Spark Streaming with Apache Kafka and HBase is covered to help you build streaming applications. The new Structured Streaming concept is explained with an IoT (Internet of Things) use case. Machine learning techniques are covered using MLlib, ML Pipelines, and SparkR, and graph analytics is covered with the GraphX and GraphFrames components of Spark. Readers will also get started with web-based notebooks such as Jupyter and Apache Zeppelin, and the data flow tool Apache NiFi, to analyze and visualize data.
Table of Contents (18 chapters)
Big Data Analytics
Credits
About the Author
Acknowledgement
About the Reviewers
www.PacktPub.com
Preface
Index

Summary


RDDs are the fundamental unit of data in Spark, and Spark programming revolves around creating RDDs and performing operations on them: transformations and actions. Spark programs can be executed interactively in a shell or by submitting applications. Parallelism is defined by the number of partitions in an RDD. For HDFS files, the number of partitions is determined by the number of blocks; for non-HDFS data, it is determined by the type of resource manager and the configuration properties used.
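As a minimal sketch (assuming a running spark-shell, where the SparkContext `sc` is created automatically, and a hypothetical HDFS input path), RDD creation, lazy transformations, an action, and partition inspection look like this:

```scala
// In spark-shell, sc (SparkContext) is available automatically.
// hdfs:///data/input.txt is a hypothetical path.
val rdd = sc.textFile("hdfs:///data/input.txt")   // one partition per HDFS block

// Transformations are lazy: nothing executes yet.
val words  = rdd.flatMap(line => line.split(" "))
val pairs  = words.map(word => (word, 1))
val counts = pairs.reduceByKey(_ + _)

// Actions trigger execution of the whole lineage.
counts.take(5).foreach(println)

// Parallelism is defined by the number of partitions.
println(s"Partitions: ${rdd.getNumPartitions}")

// For non-HDFS data, the partition count can be set explicitly.
val nums = sc.parallelize(1 to 1000, 8)           // 8 partitions
```

Because transformations are lazy, the `textFile`, `flatMap`, `map`, and `reduceByKey` calls only build a lineage graph; work is distributed across the partitions only when an action such as `take` runs.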

Caching RDDs in memory is useful when performing multiple actions on the same RDD, as it provides higher performance. When an RDD is cached with the MEMORY_ONLY option, partitions that do not fit in memory are recomputed when needed. If recomputation is expensive, it is better to choose MEMORY_AND_DISK as the persistence level.
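The two persistence levels can be sketched as follows (a running SparkContext `sc` and the input path are assumptions, not from the text):

```scala
import org.apache.spark.storage.StorageLevel

// Hypothetical input; assumes a running SparkContext `sc`.
val logs = sc.textFile("hdfs:///data/logs")

// MEMORY_ONLY (what cache() uses): partitions that do not fit
// in memory are recomputed from the lineage when needed.
val errors = logs.filter(_.contains("ERROR")).persist(StorageLevel.MEMORY_ONLY)

// Multiple actions on the same persisted RDD reuse the cached partitions.
println(errors.count())
errors.take(10).foreach(println)

// If recomputation is expensive, spill evicted partitions to disk instead.
val warnings = logs.filter(_.contains("WARN")).persist(StorageLevel.MEMORY_AND_DISK)
```

Note that an RDD's storage level cannot be changed once assigned without calling `unpersist()` first, which is why the two levels are shown on separate RDDs here.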

Spark applications can be submitted in client or cluster mode. While client mode is used for development and testing, cluster mode is used for production deployment. Spark has three...
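As a sketch (the application class `com.example.MyApp` and the jar name are hypothetical, and a Spark installation with YARN is assumed), the two deploy modes are selected with the `--deploy-mode` flag of `spark-submit`:

```shell
# Client mode (the default): the driver runs on the submitting machine,
# so the shell sees driver output directly - handy for development and testing.
spark-submit --master yarn --deploy-mode client \
  --class com.example.MyApp myapp.jar

# Cluster mode: the driver runs inside the cluster, so the application
# survives the submitting machine disconnecting - preferred for production.
spark-submit --master yarn --deploy-mode cluster \
  --class com.example.MyApp myapp.jar
```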