Modern Data Architecture on AWS

By: Behram Irani
Overview of this book

Many IT leaders and professionals are adept at extracting data from a particular type of database and deriving value from it. However, designing and implementing an enterprise-wide holistic data platform with purpose-built data services, all seamlessly working in tandem with the least amount of manual intervention, still poses a challenge. This book will help you explore end-to-end solutions to common data, analytics, and AI/ML use cases by leveraging AWS services. The chapters systematically take you through all the building blocks of a modern data platform, including data lakes, data warehouses, data ingestion patterns, data consumption patterns, data governance, and AI/ML patterns. Using real-world use cases, each chapter highlights the features and functionalities of numerous AWS services to enable you to create a scalable, flexible, performant, and cost-effective modern data platform. By the end of this book, you’ll be equipped with all the necessary architectural patterns and be able to apply this knowledge to efficiently build a modern data platform for your organization using AWS services.
Table of Contents (24 chapters)

Part 1: Foundational Data Lake
Part 2: Purpose-Built Services and Unified Data Access
Part 3: Govern, Scale, Optimize, and Operationalize

Challenges with data processing platforms

Data processing, or data transformation, is an essential part of any data pipeline, and data engineers play a big role in making sure that the data reaches its final destination, where it's ready for consumption. Over the past decade, the volume, velocity, and variety of data have made data processing challenging. Data became big data, and processing it sequentially on powerful monolithic systems proved inefficient. Data processing took a positive turn when Apache Hadoop introduced a horizontally scaling framework. Hadoop was able to process big data much more efficiently by distributing the work across many commodity hardware machines.

Even though Hadoop was promising, processing big data with MapReduce was not fast enough for many organizations. The creation of Apache Spark changed the way we process data, and even today, many modern data processing systems and platforms primarily use Spark...
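To illustrate the difference in developer experience, here is a minimal PySpark sketch; the S3 paths, column names, and application name are illustrative assumptions, not taken from the book. It expresses a distributed aggregation in a few declarative lines, where the equivalent raw MapReduce job would require considerably more code.

```python
# A minimal PySpark sketch (illustrative paths and columns) showing a
# distributed read -> transform -> aggregate -> write pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("revenue-by-region").getOrCreate()

# Read raw CSV files from a data lake location (path is hypothetical).
orders = spark.read.csv("s3://my-bucket/raw/orders/", header=True, inferSchema=True)

# Transform and aggregate; Spark distributes this work across executors.
revenue_by_region = (
    orders
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .groupBy("region")
    .agg(F.sum("revenue").alias("total_revenue"))
)

# Write the result back in a columnar format for downstream consumption.
revenue_by_region.write.mode("overwrite").parquet("s3://my-bucket/curated/revenue_by_region/")

spark.stop()
```

Spark keeps intermediate data in memory across stages rather than writing it to disk between every map and reduce step, which is one reason it generally outperforms MapReduce for iterative and multi-step pipelines.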