Data Wrangling on AWS

By: Navnit Shukla, Sankar M, Sampat Palani

Overview of this book

Data wrangling is the process of cleaning, transforming, and organizing raw, messy, or unstructured data into a structured format. It involves processes such as data cleaning, data integration, data transformation, and data enrichment to ensure that the data is accurate, consistent, and suitable for analysis. Data Wrangling on AWS equips you with the knowledge to unlock the full potential of AWS data wrangling tools. First, you'll be introduced to data wrangling on AWS and familiarized with the data wrangling services available in AWS. You'll understand how to work with AWS Glue DataBrew, AWS Data Wrangler, and Amazon SageMaker. Next, you'll discover other AWS services such as Amazon S3, Redshift, Athena, and QuickSight. Additionally, you'll explore advanced topics such as performing pandas data operations with AWS Data Wrangler, optimizing ML data with SageMaker, and building a data warehouse with Glue DataBrew, along with security and monitoring aspects. By the end of this book, you'll be well equipped to perform data wrangling using AWS services.
Table of Contents (19 chapters)
Part 1: Unleashing Data Wrangling with AWS
Part 2: Data Wrangling with AWS Tools
Part 3: AWS Data Management and Analysis
Part 4: Advanced Data Manipulation and ML Data Optimization
Part 5: Ensuring Data Lake Security and Monitoring

Enriching data from multiple sources using Athena

In this section, we will explore how to enrich data using Athena SQL, as well as how to use an Athena federated query setup to enrich data from other supported data sources.

Enriching data using Athena SQL joins

In the previous section, we saw several ways to explore data in Amazon Athena. Now, we will focus on ways to enrich that data with additional information through Athena queries.

In this phase, we can further enrich the raw data by joining it with other data sources. We will continue to use the same data source that we used in earlier sections. Let us assume a scenario where we want to identify the maximum temperature recorded this century (after the year 2000) in a specific US state (Connecticut) for a specific year (2022). We will get the readings from the Parquet table (noaa_data_parquet) that was created using a CTAS statement in the previous section.
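As a rough sketch of this kind of enrichment join, the query below filters the readings table and joins it with a station lookup table to restrict results to one state. The `stations` table and all column names (`station_id`, `state`, `year`, `temperature`) are hypothetical placeholders, not taken from the book; only the `noaa_data_parquet` table name comes from the text.

```sql
-- Sketch only: column names and the stations lookup table are assumed.
-- Find the maximum temperature recorded in Connecticut during 2022.
SELECT MAX(r.temperature) AS max_temp_2022
FROM noaa_data_parquet r
JOIN stations s
  ON r.station_id = s.station_id     -- enrich readings with station metadata
WHERE s.state = 'CT'                 -- restrict to Connecticut stations
  AND r.year = 2022;                 -- restrict to the target year
```

In Athena, joins like this run directly over data in Amazon S3, so enrichment requires no data movement as long as both tables are cataloged in the same (or a federated) data source.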

We can filter and get records from Country...