Data Engineering with AWS

By: Gareth Eagar

Overview of this book

Written by a Senior Data Architect with over twenty-five years of industry experience, Data Engineering with AWS is a book whose sole aim is to make you proficient in using the AWS ecosystem. Using a thorough and hands-on approach to data, this book will give aspiring and new data engineers a solid theoretical and practical foundation to succeed with AWS. As you progress, you'll be taken through the services and the skills you need to architect and implement data pipelines on AWS. You'll begin by reviewing important data engineering concepts and some of the core AWS services that form part of the data engineer's toolkit. You'll then architect a data pipeline, review raw data sources, transform the data, and learn how the transformed data is used by various data consumers. You'll also learn about populating data marts and data warehouses, along with how a data lakehouse fits into the picture. Later, you'll be introduced to AWS tools for analyzing data, including those for ad hoc SQL queries and creating visualizations. In the final chapters, you'll understand how the power of machine learning and artificial intelligence can be used to draw new insights from data. By the end of this AWS book, you'll be able to carry out data engineering tasks and implement a data pipeline on AWS independently.
Table of Contents (19 chapters)

Section 1: AWS Data Engineering Concepts and Trends
Section 2: Architecting and Implementing Data Lakes and Data Lake Houses
Section 3: The Bigger Picture: Data Analytics, Data Visualization, and Machine Learning

Understanding data sources

Over the past decade, the amount and the variety of data that gets generated each year has significantly increased. Today, industry analysts talk about the volume of global data generated in a year in terms of zettabytes (ZB), a unit of measurement equal to a billion terabytes (TB). By some estimates, a little over 1 ZB of data existed in the world in 2012, and yet by the end of 2020, an estimated 59 ZB of data had been consumed globally.
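To put these units in perspective, here is a minimal sketch of the arithmetic behind the figures above (the 2012 and 2020 values are the rough estimates cited in the text, not precise measurements):

```python
# Unit relationship used in the text: 1 ZB = 1 billion TB.
# (In decimal SI units, 1 ZB = 10**21 bytes and 1 TB = 10**12 bytes.)
ZB_IN_TB = 10**21 // 10**12  # 1,000,000,000 TB per ZB

data_2012_zb = 1   # rough estimate: data existing in the world in 2012
data_2020_zb = 59  # rough estimate: data consumed globally by end of 2020

growth_factor = data_2020_zb / data_2012_zb

print(f"1 ZB = {ZB_IN_TB:,} TB")
print(f"Estimated growth, 2012 to 2020: ~{growth_factor:.0f}x")
```

In other words, by these estimates the world's data grew by well over an order of magnitude in under a decade, which is why pipeline designs need to account for data volume from the start.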

In our pipeline whiteboarding session (covered in Chapter 5, Architecting Data Engineering Pipelines), we identified several data sources that we wanted to ingest and transform to best enable our data consumers. For each data source identified in a whiteboarding session, you need to develop an understanding of the variety, volume, velocity, veracity, and value of the data.
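One practical way to capture the outcome of such a session is to record the five Vs for each source in a simple structure. The following sketch is purely illustrative (the class, field names, and sample values are assumptions, not something prescribed by the book):

```python
from dataclasses import dataclass


@dataclass
class DataSourceProfile:
    """Hypothetical record of the five Vs for one data source."""
    name: str
    variety: str   # e.g. structured tables, semi-structured JSON, unstructured text
    volume: str    # e.g. "500 GB historical, 2 GB/day incremental"
    velocity: str  # e.g. "daily batch" or "streaming, ~1,000 events/sec"
    veracity: str  # notes on data quality and trustworthiness
    value: str     # the business value data consumers expect to derive


# Example entry from a whiteboarding session (sample values only)
sources = [
    DataSourceProfile(
        name="sales_transactions",
        variety="structured (relational database)",
        volume="500 GB historical, 2 GB/day incremental",
        velocity="near real-time change data capture",
        veracity="high -- system of record",
        value="revenue reporting and demand forecasting",
    ),
]

for source in sources:
    print(f"{source.name}: {source.velocity}")
```

Keeping a profile like this per source makes it easier to compare ingestion options later, since velocity and volume in particular tend to drive the choice of ingestion service.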

Data variety

In the past decade, the variety of data that has been used in data analytics projects has greatly increased...