
Modern Data Architecture on AWS

By : Behram Irani

Overview of this book

Many IT leaders and professionals are adept at extracting data from a particular type of database and deriving value from it. However, designing and implementing an enterprise-wide, holistic data platform with purpose-built data services, all working seamlessly in tandem with minimal manual intervention, still poses a challenge. This book will help you explore end-to-end solutions to common data, analytics, and AI/ML use cases by leveraging AWS services. The chapters systematically take you through all the building blocks of a modern data platform, including data lakes, data warehouses, data ingestion patterns, data consumption patterns, data governance, and AI/ML patterns. Using real-world use cases, each chapter highlights the features and functionalities of numerous AWS services to enable you to create a scalable, flexible, performant, and cost-effective modern data platform. By the end of this book, you'll be equipped with all the necessary architectural patterns and be able to apply this knowledge to efficiently build a modern data platform for your organization using AWS services.
Table of Contents (24 chapters)

  • Part 1: Foundational Data Lake
  • Part 2: Purpose-Built Services and Unified Data Access
  • Part 3: Govern, Scale, Optimize, and Operationalize

Batch Data Ingestion

In this chapter, we will look at the following key topics:

  • Database migration using AWS DMS
  • SaaS data ingestion using Amazon AppFlow
  • Data ingestion using AWS Glue
  • File and storage migration

So far, we have looked at creating scalable data lakes using Amazon S3 as the storage layer and the AWS Glue Data Catalog as the metadata repository. We saw how you can create layers of a data lake in S3 so that data can be systematically managed for specific personas in your organization. The very first layer we created in S3 was the raw layer, which is meant to store the source system data without any major changes. This also means that we first need to identify all the source systems that we need data from so that we can create a centralized data lake.
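The raw-layer convention described above is usually realized as a simple S3 prefix scheme: one prefix per source system, one sub-prefix per table or entity. The following is a minimal sketch of such a convention; the bucket name, source-system names, and table names are hypothetical, and in practice you would register these locations as tables in the AWS Glue Data Catalog.

```python
# Sketch of a raw-layer path convention for a data lake on S3.
# Bucket and source-system names here are hypothetical examples.

RAW_LAYER = "raw"

def raw_layer_uri(bucket: str, source_system: str, table: str) -> str:
    """Build the S3 URI where a given source table lands in the raw layer."""
    return f"s3://{bucket}/{RAW_LAYER}/{source_system}/{table}/"

# Hypothetical source systems feeding the centralized data lake
sources = {
    "orders_db": ["orders", "customers"],   # e.g., an operational database
    "crm_saas": ["contacts"],               # e.g., a SaaS application
}

for system, tables in sources.items():
    for table in tables:
        print(raw_layer_uri("my-datalake-bucket", system, table))
```

Keeping the raw layer partitioned by source system makes it straightforward to grant per-source access and to point ingestion jobs (AWS DMS, Amazon AppFlow, or AWS Glue, covered next) at a predictable landing location.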

The mechanism by which we bring the data over into the raw layer of the data lake in S3 is also termed data ingestion. Data ingestion can either be in batches, where we bring the data over in...