Serverless ETL and Analytics with AWS Glue

By: Vishal Pathak, Subramanya Vajiraya, Noritaka Sekiyama, Tomohiro Tanaka, Albert Quiroga, Ishan Gaur

Overview of this book

Organizations have gravitated toward services such as AWS Glue that undertake the undifferentiated heavy lifting and provide serverless Spark, enabling you to create and manage data lakes without provisioning infrastructure. This guide shows you how AWS Glue can be used to solve real-world problems while teaching you data processing, data integration, and how to build data lakes. Beginning with the basics of AWS Glue, this book teaches you how to perform various aspects of data analysis, such as ad hoc queries, data visualization, and real-time analysis, using the service. It also walks you through CI/CD for AWS Glue and how to shift left on quality using automated regression tests. You’ll find out how data security aspects such as access control, encryption, auditing, and networking are implemented, and get to grips with useful techniques such as picking the right file format, compression, partitioning, and bucketing. As you advance, you’ll discover AWS Glue features such as crawlers, Lake Formation, governed tables, lineage, DataBrew, Glue Studio, and custom connectors. The concluding chapters help you understand the various performance tuning, troubleshooting, and monitoring options. By the end of this AWS book, you’ll be able to create, manage, troubleshoot, and deploy ETL pipelines using AWS Glue.
Table of Contents (20 chapters)

Section 1 – Introduction, Concepts, and the Basics of AWS Glue
Section 2 – Data Preparation, Management, and Security
Section 3 – Tuning, Monitoring, Data Lake Common Scenarios, and Interesting Edge Cases

Data lakehouse

Challenged by newer demands to derive value from vast and ever-increasing volumes of unstructured data, organizations needed a new arrangement that does not try to force unstructured data into the strict models of a data warehouse. The data lakehouse blurs the lines between data lakes and data warehouses by enabling the atomicity, consistency, isolation, and durability (ACID) properties on the data in the data lake and by allowing multiple processes to concurrently read and write data.
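To make this concrete, here is a minimal PySpark sketch of the lakehouse pattern using Delta Lake, one open table format that layers ACID transactions on top of Parquet files in a data lake (the book itself covers Lake Formation governed tables for the same purpose). The bucket name and table path are hypothetical, and the sketch assumes the delta-spark package is available:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Assumes delta-spark is on the classpath; the S3 path is hypothetical.
spark = (
    SparkSession.builder
    .appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Write a small table. Delta stores the data as Parquet plus a transaction
# log; the log is what provides ACID guarantees for concurrent readers
# and writers.
orders = spark.createDataFrame(
    [(1, "shipped"), (2, "pending")], ["order_id", "status"]
)
orders.write.format("delta").mode("overwrite").save("s3://my-data-lake/orders")

# An update goes through the transaction log, so a concurrent reader never
# sees a half-written table.
table = DeltaTable.forPath(spark, "s3://my-data-lake/orders")
table.update(
    condition="order_id = 2",
    set={"status": "'shipped'"},
)
```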

With this arrangement, transformed data in open formats such as Apache Parquet can be consumed for feature engineering and machine learning (ML) workloads, and can also be used for analytics.
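Continuing the hypothetical sketch above, the same table can then be read back for ad hoc analytics or loaded as input features for an ML pipeline:

```python
# Read the Delta table (Parquet under the hood) written earlier and run a
# simple aggregation; the path is hypothetical.
spark.read.format("delta").load("s3://my-data-lake/orders") \
    .groupBy("status").count().show()
```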