
Simplify Big Data Analytics with Amazon EMR

By: Sakti Mishra

Overview of this book

Amazon EMR, formerly Amazon Elastic MapReduce, provides a managed Hadoop cluster in Amazon Web Services (AWS) that you can use to implement batch or streaming data pipelines. By gaining expertise in Amazon EMR, you can design and implement data analytics pipelines with persistent or transient EMR clusters in AWS. This book is a practical guide to building data pipelines with Amazon EMR. You'll start by understanding the Amazon EMR architecture, cluster nodes, features, and deployment options, along with their pricing. Next, the book covers the big data applications that EMR supports. You'll then focus on the advanced configuration of EMR applications, hardware, networking, security, troubleshooting, logging, and the different SDKs and APIs it provides. Later chapters show you how to implement common Amazon EMR use cases, including batch ETL with Spark, real-time streaming with Spark Streaming, and handling UPSERT operations in an S3 data lake with Apache Hudi. Finally, you'll orchestrate your EMR jobs and plan the migration of your on-premises Hadoop clusters to EMR. Along the way, you'll explore best practices and cost optimization techniques for implementing your data analytics pipelines in EMR. By the end of this book, you'll be able to build and deploy Hadoop- or Spark-based apps on Amazon EMR and migrate your existing on-premises Hadoop workloads to AWS.
Table of Contents (19 chapters)

Section 1: Overview, Architecture, Big Data Applications, and Common Use Cases of Amazon EMR
Section 2: Configuration, Scaling, Data Security, and Governance
Section 3: Implementing Common Use Cases and Best Practices

Migrating ETL jobs and Oozie workflows

If you are doing a lift-and-shift migration and your ETL scripts are configured to read from and write to HDFS, then your existing Hive, MapReduce, and Spark scripts will work just fine in EMR without substantial changes. But if, while migrating to AWS, you re-architect to use Amazon S3 as your persistence layer instead of HDFS, then you will have to change your scripts to interact with Amazon S3 (s3://) through EMRFS.
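
For example, if an existing Spark job reads and writes Parquet data on HDFS, repointing it at S3 is usually just a matter of swapping the URIs. The following is a minimal PySpark sketch of that change; the bucket name and paths are hypothetical placeholders, not values from this book:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-job").getOrCreate()

# Before migration: the job reads from and writes to HDFS on the cluster
# df = spark.read.parquet("hdfs:///data/raw/orders/")
# df.write.mode("overwrite").parquet("hdfs:///data/curated/orders/")

# After migration: the same job pointed at Amazon S3 via EMRFS.
# Only the URIs change; the transformation logic stays the same.
df = spark.read.parquet("s3://my-example-bucket/data/raw/orders/")
df.write.mode("overwrite").parquet("s3://my-example-bucket/data/curated/orders/")
```

Because EMRFS exposes S3 through the same Hadoop filesystem interface as HDFS, this URI swap is typically the only code change such a job needs.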

Important Note

Prior to the release of Amazon EMR 5.22.0, EMR supported the s3a:// and s3n:// prefixes for interacting with EMRFS. These prefixes haven't been deprecated and still work, but it is now recommended that you use the s3:// prefix, which provides a higher level of security and easier integration with Amazon S3.
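
As a quick illustration, the same S3 location can be addressed with any of the three prefixes, but s3:// is the recommended form on EMR. This is a minimal sketch with a hypothetical bucket name:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("prefix-demo").getOrCreate()

# "my-example-bucket" is a hypothetical placeholder bucket name
path_s3n = "s3n://my-example-bucket/data/events/"  # legacy S3 native prefix
path_s3a = "s3a://my-example-bucket/data/events/"  # Hadoop S3A prefix
path_s3  = "s3://my-example-bucket/data/events/"   # EMRFS; recommended on EMR

# All three resolve to the same objects, but new code should use s3://
df = spark.read.json(path_s3)
```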

Apart from your Hive and Spark scripts, if you are using Apache Oozie to orchestrate your ETL workflows, then you need to plan for its migration too. Let's understand what options you have for...