Apache Spark with Scala – Hands-On with Big Data! [Video]

By : Frank Kane
5 (1)
Overview of this book

“Big data” analysis is a hot and highly valuable skill, and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive datasets across a fault-tolerant Hadoop cluster. You will learn those same techniques using your own Windows system right at home. It is easier than you think, and you will learn from an ex-engineer and senior manager from Amazon and IMDb. In this course, you will learn the concepts of Spark’s Resilient Distributed Datasets (RDDs), DataFrames, and Datasets. We will also cover a crash course in the Scala programming language to support the rest of the material. You will learn how to develop and run Spark jobs quickly using Scala, IntelliJ, and SBT, and how to translate complex analysis problems into iterative or multi-stage Spark scripts. You will learn how to scale up to larger datasets using Amazon’s Elastic MapReduce service, and understand how Hadoop YARN distributes Spark across computing clusters. We will also practice using other Spark technologies, such as Spark SQL, DataFrames, Datasets, Spark Streaming, machine learning, and GraphX. By the end of this course, you will be running code that analyzes gigabytes’ worth of information—in the cloud—in a matter of minutes. All the code and supporting files for this course are available at https://github.com/PacktPublishing/Apache-Spark-with-Scala---Hands-On-with-Big-Data-
Table of Contents (10 chapters)

Chapter 8: Introduction to Spark Streaming
Chapter 10: You Made It! Where to Go from Here

Section 1: The DStream API for Spark Streaming
Spark Streaming allows you to write Spark driver scripts that run indefinitely, continually processing data as it streams in. We will cover how it works and what it can do, using the original DStream micro-batch API.
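To give a feel for the DStream API before the lessons begin, here is a minimal word-count sketch in Scala. It assumes a local text source on port 9999 (for example, one started with `nc -lk 9999`); the object name `StreamingWordCount` and the one-second batch interval are illustrative choices, not part of the course material.

```scala
// Minimal DStream sketch: count words arriving on a local socket.
// Assumes the spark-streaming library is on the classpath and a text
// source is listening on localhost:9999 (e.g. `nc -lk 9999`).
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")          // at least 2 threads: 1 receiver + 1 worker
      .setAppName("StreamingWordCount")

    // A DStream is a sequence of RDDs, one per micro-batch;
    // here each batch covers one second of incoming data.
    val ssc = new StreamingContext(conf, Seconds(1))

    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.print()          // dump each batch's counts to the console

    ssc.start()             // start the streaming computation
    ssc.awaitTermination()  // run indefinitely until stopped
  }
}
```

Note the driver never exits on its own: `awaitTermination()` blocks while Spark keeps turning each one-second micro-batch into an RDD and re-running the same transformation pipeline over it, which is exactly the "runs indefinitely" behavior described above.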