Serverless ETL and Analytics with AWS Glue

By: Vishal Pathak, Subramanya Vajiraya, Noritaka Sekiyama, Tomohiro Tanaka, Albert Quiroga, Ishan Gaur

Overview of this book

Organizations these days have gravitated toward services such as AWS Glue that undertake undifferentiated heavy lifting and provide serverless Spark, enabling you to create and manage data lakes in a serverless fashion. This guide shows you how AWS Glue can be used to solve real-world problems while teaching you about data processing, data integration, and building data lakes. Beginning with AWS Glue basics, this book teaches you how to perform various aspects of data analysis, such as ad hoc queries, data visualization, and real-time analysis, using this service. It also walks you through CI/CD for AWS Glue and how to shift left on quality using automated regression tests. You’ll find out how data security aspects such as access control, encryption, auditing, and networking are implemented, and get to grips with useful techniques such as picking the right file format, compression, partitioning, and bucketing. As you advance, you’ll discover AWS Glue features such as crawlers, Lake Formation, governed tables, lineage, DataBrew, Glue Studio, and custom connectors. The concluding chapters help you understand the various performance tuning, troubleshooting, and monitoring options. By the end of this AWS book, you’ll be able to create, manage, troubleshoot, and deploy ETL pipelines using AWS Glue.
Table of Contents (20 chapters)

Section 1 – Introduction, Concepts, and the Basics of AWS Glue
Section 2 – Data Preparation, Management, and Security
Section 3 – Tuning, Monitoring, Data Lake Common Scenarios, and Interesting Edge Cases

Chapter 15: Architecting Data Lakes for Real-World Scenarios and Edge Cases

We are now well versed in the concept of a data lake: a centralized repository that allows you to store all your structured and unstructured data at any scale. Since a data lake primarily focuses on storage, it does not require as much processing power as other approaches (such as a data warehouse), making it easier, faster, and more cost-effective to scale as data volumes grow.

The data lake is not just a repository – it requires a well-designed data architecture, along with proper planning and management. Because its design is data-driven rather than requirements-driven, it lets you rapidly ingest raw data before any business requirements come into the picture. A variety of tools can be used to ingest raw data into a data lake, including ETL tools such as Ab Initio, Informatica, and DataStage.
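To make that ingestion step concrete, the following is a minimal sketch of an AWS Glue ETL job that reads raw CSV files from a landing zone in S3 and rewrites them as partitioned Parquet in a curated zone. The bucket names, paths, and the order_date partition column are hypothetical placeholders, not examples taken from this book:

    # Minimal sketch of raw-data ingestion with an AWS Glue job.
    # Bucket names, paths, and the partition column are hypothetical.
    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    sc = SparkContext()
    glue_context = GlueContext(sc)
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw CSV files landed in the "raw" zone of the lake.
    # No upfront schema design is needed; the schema is inferred on read.
    raw = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://my-data-lake/raw/orders/"]},
        format="csv",
        format_options={"withHeader": True},
    )

    # Write to the "curated" zone as Parquet, partitioned by a date column.
    glue_context.write_dynamic_frame.from_options(
        frame=raw,
        connection_type="s3",
        connection_options={
            "path": "s3://my-data-lake/curated/orders/",
            "partitionKeys": ["order_date"],
        },
        format="parquet",
    )

    job.commit()

Partitioning and converting to a columnar format at write time keep downstream queries cheap, which ties back to the file format, compression, partitioning, and bucketing techniques covered earlier in the book.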

This chapter mainly covers practical examples of real-world data problems that exhibit certain bottlenecks and...