Data Wrangling on AWS

By: Navnit Shukla, Sankar M, Sampat Palani
Overview of this book

Data wrangling is the process of cleaning, transforming, and organizing raw, messy, or unstructured data into a structured format. It involves processes such as data cleaning, data integration, data transformation, and data enrichment to ensure that the data is accurate, consistent, and suitable for analysis. Data Wrangling on AWS equips you with the knowledge to realize the full potential of AWS data wrangling tools. First, you'll be introduced to data wrangling on AWS and familiarized with the data wrangling services available in AWS. You'll understand how to work with AWS Glue DataBrew, AWS Data Wrangler (the AWS SDK for pandas), and Amazon SageMaker. Next, you'll discover other AWS services such as Amazon S3, Redshift, Athena, and QuickSight. Additionally, you'll explore advanced topics such as performing pandas data operations with the AWS SDK for pandas, optimizing ML data with Amazon SageMaker, and building a data warehouse with Glue DataBrew, along with security and monitoring aspects. By the end of this book, you'll be well-equipped to perform data wrangling using AWS services.
Table of Contents (19 chapters)

Part 1: Unleashing Data Wrangling with AWS
Part 2: Data Wrangling with AWS Tools
Part 3: AWS Data Management and Analysis
Part 4: Advanced Data Manipulation and ML Data Optimization
Part 5: Ensuring Data Lake Security and Monitoring

Building blocks of AWS SDK for pandas

In this section, we will explore the building blocks of the AWS SDK for pandas library: Apache Arrow, pandas, and Boto3. Before looking at each one individually, the short sketch that follows shows how they typically fit together.
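The following is a minimal sketch of the library in use, assuming it is installed and imported as awswrangler; the bucket name, S3 path, and region are placeholders, and the boto3_session argument can be omitted if default AWS credentials are already configured. pandas supplies the DataFrame, Boto3 supplies the AWS session, and Arrow handles the columnar conversion behind the scenes.

import awswrangler as wr
import boto3
import pandas as pd

# pandas provides the in-memory DataFrame abstraction
df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# Boto3 supplies the AWS session and credentials used for every API call
session = boto3.Session(region_name="us-east-1")  # placeholder region

# Apache Arrow powers the conversion to and from Parquet under the hood
wr.s3.to_parquet(
    df=df,
    path="s3://my-example-bucket/data/",  # placeholder bucket/path
    dataset=True,
    boto3_session=session,
)

# Read the data back from S3 into a pandas DataFrame
df_back = wr.s3.read_parquet(
    path="s3://my-example-bucket/data/",
    dataset=True,
    boto3_session=session,
)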

Arrow

Apache Arrow (https://arrow.apache.org/) is an in-memory, column-oriented data format (similar to the DataFrame format) used across many systems and programming languages for efficient in-memory analytical operations on modern hardware. You may already be familiar with the Parquet file format, which stores data in a columnar format on disk; however, once that data is loaded into memory, each runtime maps it differently (for example, Spark loads data into its own DataFrames after Parquet files are read from disk). So, when data needs to be exchanged across systems, it must be serialized and deserialized to convert it from one format to another. One advantage of using Arrow is the ability to store data in a columnar format in memory, supporting analytical operations and efficiently transferring data between different systems...
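To make this concrete, here is a small, self-contained sketch using the pyarrow package (which the AWS SDK for pandas builds on) that converts a pandas DataFrame to an Arrow Table, writes it to a Parquet file, and reads it back; the file name and sample data are just illustrative placeholders.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Start with a regular pandas DataFrame
df = pd.DataFrame({"city": ["Seattle", "Austin"], "temp_c": [12.5, 30.1]})

# Convert it to an Arrow Table -- a column-oriented, in-memory representation
table = pa.Table.from_pandas(df)

# Persist the columnar data to disk as Parquet
pq.write_table(table, "example.parquet")  # placeholder file name

# Read it back: the Parquet columns map naturally onto Arrow's in-memory layout
table_back = pq.read_table("example.parquet")

# Convert back to pandas when row-oriented analysis APIs are needed
df_back = table_back.to_pandas()
print(df_back)

Because both Parquet on disk and Arrow in memory are columnar, this round trip avoids the expensive row-by-row conversions that would otherwise be needed when moving data between storage and different analytical runtimes.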