Big Data Processing with Apache Spark

By: John Bura

Overview of this book

Processing big data in real time is challenging because of scalability, data-consistency, and fault-tolerance requirements. Big Data Processing with Apache Spark teaches you how to use Spark to make your overall analytical workflow faster and more efficient. You'll explore the core concepts and tools of the Spark ecosystem, including Spark Streaming, the Spark Streaming API, the machine learning extension, and Structured Streaming. You'll begin by learning the fundamentals of data processing with the Resilient Distributed Dataset (RDD), SQL, Dataset, and DataFrame APIs. After grasping these fundamentals, you'll use the Spark Streaming APIs to consume data in real time from TCP sockets and integrate Amazon Web Services (AWS) for stream consumption. By the end of this course, you'll not only understand how to use the machine learning extension and Structured Streaming, but you'll also be able to apply Spark to your own big data projects. The code bundle for this course is available at https://github.com/TrainingByPackt/Big-Data-Processing-with-Apache-Spark
Chapter 3: Spark Streaming Integration with AWS
Section 4: AWS S3 Basic Functionality
Amazon S3 is an object storage service in which data is stored within resources called buckets. You can place objects in a bucket and perform operations on them such as write, read, and delete. A single object can be up to 5 TB in size. In this section, we will explore these operations and more.
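As a preview, here is a minimal sketch of the write, read, and delete operations described above, using the boto3 AWS SDK for Python (this snippet is illustrative and not taken from the course material). It assumes AWS credentials are already configured (for example, via aws configure), and the bucket and key names are hypothetical placeholders.

import boto3

# Create an S3 client; credentials are resolved from the standard
# AWS credential chain (environment variables, ~/.aws/credentials, etc.).
s3 = boto3.client("s3")

BUCKET = "my-example-bucket"   # hypothetical bucket name; must be globally unique
KEY = "data/sample.txt"        # hypothetical object key

# Write: upload an object to the bucket (each object can be up to 5 TB).
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"hello from the Spark course")

# Read: fetch the object and decode its contents.
response = s3.get_object(Bucket=BUCKET, Key=KEY)
print(response["Body"].read().decode("utf-8"))

# Delete: remove the object from the bucket.
s3.delete_object(Bucket=BUCKET, Key=KEY)

In later sections, Spark Streaming will consume data from S3 directly, but these three primitives are the building blocks of everything S3 does.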