Big Data Processing with Apache Spark

By: John Bura

Overview of this book

Processing big data in real time is challenging due to scalability, information consistency, and fault tolerance. Big Data Processing with Apache Spark teaches you how to use Spark to make your overall analytical workflow faster and more efficient. You'll explore the core concepts and tools of the Spark ecosystem, such as Spark Streaming, the Spark Streaming API, the machine learning extension, and Structured Streaming. You'll begin by learning data processing fundamentals using the Resilient Distributed Dataset (RDD), SQL, Dataset, and DataFrame APIs. After grasping these fundamentals, you'll move on to using the Spark Streaming API to consume data in real time from TCP sockets, and integrate Amazon Web Services (AWS) for stream consumption. By the end of this course, you'll not only understand how to use the machine learning extension and structured streams, but you'll also be able to apply Spark to your own upcoming big data projects. The code bundle for this course is available at https://github.com/TrainingByPackt/Big-Data-Processing-with-Apache-Spark
Table of Contents (4 chapters)
Chapter 1
Introduction to Spark Distributed Processing
Section 9
Introduction to SQL, Datasets, and DataFrames
Before we look at how each of these works in Spark, let us first define them. A Dataset is a distributed collection of data that carries additional metadata about the structure of the data it stores. A DataFrame is a Dataset organized into named columns. DataFrames can be built from different sources, such as JSON, XML, and databases. In this section, we will cover each of them in detail. For further information on the MovieLens datasets, check this link: https://grouplens.org/datasets/movielens/
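To make these definitions concrete, here is a minimal PySpark sketch that builds a DataFrame from a JSON source and queries it with SQL. It assumes a local SparkSession and a placeholder file named people.json with name and age fields; these are illustrative assumptions, not part of the course's code bundle.

```python
from pyspark.sql import SparkSession

# Start a local SparkSession, the entry point for the DataFrame and SQL APIs.
spark = (SparkSession.builder
         .appName("DataFrameIntro")
         .master("local[*]")
         .getOrCreate())

# Build a DataFrame from a JSON source; "people.json" is a placeholder path.
people_df = spark.read.json("people.json")

# Inspect the inferred schema, i.e. the named columns that define a DataFrame.
people_df.printSchema()

# Register the DataFrame as a temporary view so it can be queried with SQL.
people_df.createOrReplaceTempView("people")
adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()

spark.stop()
```

The same result could be expressed with the DataFrame API directly (for example, people_df.filter("age >= 18").select("name", "age")); the SQL view is shown here to illustrate how the SQL, Dataset, and DataFrame APIs work over the same distributed collection.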