Data Wrangling on AWS

By : Navnit Shukla, Sankar M, Sampat Palani

Overview of this book

Data wrangling is the process of cleaning, transforming, and organizing raw, messy, or unstructured data into a structured format. It involves processes such as data cleaning, data integration, data transformation, and data enrichment to ensure that the data is accurate, consistent, and suitable for analysis. Data Wrangling on AWS equips you with the knowledge to reap the full potential of AWS data wrangling tools. First, you'll be introduced to data wrangling on AWS and familiarized with the data wrangling services available in AWS. You'll understand how to work with AWS Glue DataBrew, AWS Data Wrangler, and Amazon SageMaker. Next, you'll discover other AWS services such as Amazon S3, Redshift, Athena, and QuickSight. Additionally, you'll explore advanced topics such as performing pandas data operations with AWS Data Wrangler, optimizing ML data with SageMaker, and building a data warehouse with Glue DataBrew, along with security and monitoring aspects. By the end of this book, you'll be well equipped to perform data wrangling using AWS services.
Table of Contents (19 chapters)

Part 1: Unleashing Data Wrangling with AWS
Part 2: Data Wrangling with AWS Tools
Part 3: AWS Data Management and Analysis
Part 4: Advanced Data Manipulation and ML Data Optimization
Part 5: Ensuring Data Lake Security and Monitoring

Summary

SageMaker Data Wrangler is a purpose-built tool for analyzing and processing data for machine learning, and it is one of the foundational platforms for machine learning on AWS. This has been a long chapter, and although we covered several key features of Data Wrangler, a few features were left out of this book. We started by looking at how to log in to SageMaker Studio and access Data Wrangler. For the sample dataset, we used the built-in Titanic dataset, which is available via a public S3 bucket. We imported this dataset into Data Wrangler using the default sampling method. We then performed EDA, first by using the built-in insights report in Data Wrangler and then by adding further analyses, including some using our own custom code. Next, we defined several data transformation steps in our Data Wrangler flow to perform feature engineering, using several of Data Wrangler's built-in data transformations. We also looked at applying a custom data...
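The custom transform steps mentioned above accept Pandas code that receives a DataFrame and returns a modified one. As a rough illustration of the kind of feature engineering performed in the flow, here is a minimal sketch of such a transform on Titanic-style data. The column names (`age`, `sibsp`, `parch`, `embarked`) follow the classic Titanic schema and are assumptions; the exact columns and transforms in the chapter's flow may differ, and the in-memory sample below merely stands in for the S3-hosted dataset.

```python
import pandas as pd

def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    """Example feature-engineering step, similar in spirit to a
    Data Wrangler custom (Pandas) transform."""
    out = df.copy()
    # Impute missing ages with the median (a common cleaning step)
    out["age"] = out["age"].fillna(out["age"].median())
    # Derive a family-size feature from siblings/spouses + parents/children
    out["family_size"] = out["sibsp"] + out["parch"] + 1
    # One-hot encode the embarkation port
    out = pd.get_dummies(out, columns=["embarked"], prefix="embarked")
    return out

# Tiny in-memory sample standing in for the Titanic dataset
sample = pd.DataFrame({
    "age": [22.0, None, 38.0],
    "sibsp": [1, 0, 1],
    "parch": [0, 0, 0],
    "embarked": ["S", "C", "S"],
})
features = engineer_features(sample)
print(features["family_size"].tolist())  # [2, 1, 2]
```

Inside Data Wrangler, the body of such a function is what you would paste into a "Custom transform" step (with Pandas selected as the framework), while the built-in transforms cover common cases like imputation and encoding without any code.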