Modern Data Architecture on AWS

By: Behram Irani
Overview of this book

Many IT leaders and professionals are adept at extracting data from a particular type of database and deriving value from it. However, designing and implementing an enterprise-wide holistic data platform with purpose-built data services, all seamlessly working in tandem with the least amount of manual intervention, still poses a challenge. This book will help you explore end-to-end solutions to common data, analytics, and AI/ML use cases by leveraging AWS services. The chapters systematically take you through all the building blocks of a modern data platform, including data lakes, data warehouses, data ingestion patterns, data consumption patterns, data governance, and AI/ML patterns. Using real-world use cases, each chapter highlights the features and functionalities of numerous AWS services to enable you to create a scalable, flexible, performant, and cost-effective modern data platform. By the end of this book, you’ll be equipped with all the necessary architectural patterns and be able to apply this knowledge to efficiently build a modern data platform for your organization using AWS services.
Table of Contents (24 chapters)

Part 1: Foundational Data Lake
Part 2: Purpose-Built Services and Unified Data Access
Part 3: Govern, Scale, Optimize, and Operationalize

Data transformation using ELT patterns

There are several reasons why ELT patterns may be more appealing for certain data projects. Sometimes you need the data available in raw format as soon as possible; sometimes it's a matter of which programming languages or tools the personas involved are comfortable with; and other times it's simply about cost efficiency. Amazon Redshift also provides a platform where data engineering teams can build their ELT pipelines. Let's introduce a use case to understand this pattern.

Use case for ELT inside Amazon Redshift

GreatFin uses AWS DMS to create a continuous data ingestion pipeline from many source data stores into Amazon Redshift. Once the data has landed in Redshift, a set of technical and business rules needs to be applied to it before it is ready for consumption. The various teams are well versed in SQL and prefer to write ANSI SQL logic to transform the data. The teams also want to save costs by not...
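A minimal sketch of what such an ELT step might look like inside Redshift: the raw table landed by DMS is transformed with plain ANSI SQL into a curated table. All schema, table, and column names here (`raw.orders`, `curated.orders`, and the rules applied) are hypothetical stand-ins, not part of the GreatFin use case.

```sql
-- Hypothetical ELT step, run entirely inside Amazon Redshift.
-- "raw.orders" stands in for a table continuously loaded by DMS;
-- "curated.orders" is the consumption-ready result.
CREATE TABLE curated.orders AS
SELECT
    order_id,
    UPPER(TRIM(customer_code))          AS customer_code,   -- technical rule: normalize the key
    CAST(order_amount AS DECIMAL(12, 2)) AS order_amount,   -- technical rule: enforce a numeric type
    order_ts
FROM raw.orders
WHERE order_amount > 0                                      -- business rule: drop invalid orders
  AND order_ts >= DATE '2023-01-01';                        -- business rule: only current-era data
```

Because the transformation runs as SQL inside the warehouse, the team needs no separate transformation engine: the "T" of ELT happens after load, on data that is already in Redshift.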