Data Wrangling on AWS

By: Navnit Shukla, Sankar M, Sampat Palani
Overview of this book

Data wrangling is the process of cleaning, transforming, and organizing raw, messy, or unstructured data into a structured format. It involves processes such as data cleaning, data integration, data transformation, and data enrichment to ensure that the data is accurate, consistent, and suitable for analysis. Data Wrangling on AWS equips you with the knowledge to realize the full potential of AWS data wrangling tools. First, you’ll be introduced to data wrangling on AWS and familiarized with the data wrangling services available in AWS. You’ll understand how to work with AWS Glue DataBrew, AWS Data Wrangler, and Amazon SageMaker. Next, you’ll discover other AWS services such as Amazon S3, Redshift, Athena, and QuickSight. Additionally, you’ll explore advanced topics such as performing pandas data operations with AWS Data Wrangler, optimizing ML data with Amazon SageMaker, and building a data warehouse with Glue DataBrew, along with security and monitoring aspects. By the end of this book, you’ll be well equipped to perform data wrangling using AWS services.
Table of Contents (19 chapters)

Part 1: Unleashing Data Wrangling with AWS
Part 2: Data Wrangling with AWS Tools
Part 3: AWS Data Management and Analysis
Part 4: Advanced Data Manipulation and ML Data Optimization
Part 5: Ensuring Data Lake Security and Monitoring

Data quality validation

Data quality validation is an important phase in a data pipeline because it ensures the correctness of the data used in analysis. Without correct data, even the best analytical tools will produce incorrect insights, so customers and developers need to invest in the data quality phase to create accurate datasets for further analysis.
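To make this concrete, here is a minimal sketch of what a data quality validation step might look like using pandas. The results DataFrame, its column names, and the specific checks are assumptions for illustration only, not taken from the book:

import pandas as pd

# Hypothetical student results table; the column names and sample
# values are assumptions for illustration.
results = pd.DataFrame(
    {"student_id": [1, 2, 3, 3], "score": ["85", "ninety", None, "72"]}
)

def validate_scores(df: pd.DataFrame) -> dict:
    """Return a summary of basic data quality check failures."""
    scores = pd.to_numeric(df["score"], errors="coerce")
    return {
        "duplicate_student_ids": int(df["student_id"].duplicated().sum()),
        "non_numeric_scores": int((scores.isna() & df["score"].notna()).sum()),
        "missing_scores": int(df["score"].isna().sum()),
        "scores_out_of_range": int(((scores < 0) | (scores > 100)).sum()),
    }

issues = validate_scores(results)
print(issues)
# A pipeline step could fail fast on any non-zero count, for example:
# assert not any(issues.values()), f"Data quality checks failed: {issues}"

A validation step like this does not fix anything; it only reports whether the dataset meets expectations so that the pipeline can stop before bad data reaches the analysis layer.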

What is the difference between data quality validation and data cleansing? Some of us might confuse the two. In reality, there is some overlap between the phases, and some activities are used interchangeably:

  • Data cleansing is the phase where we clean and deduplicate data and address generic data issues, such as splitting fields for more meaningful analysis, fixing data errors, and so on. Without cleansing, the data might not be useful for analysis efforts. For example, in a student database and results table, the score column can have non-numeric values or missing...
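Building on the student results example, the following is a minimal cleansing sketch in pandas; the DataFrame, column names, and chosen remediation (dropping unusable rows) are assumptions for illustration, not the book's prescribed approach:

import pandas as pd

# Hypothetical student results table showing the issues described above:
# an exact duplicate row, a non-numeric score, and a missing score.
results = pd.DataFrame(
    {
        "student_id": [1, 1, 2, 3],
        "score": ["85", "85", "ninety", None],
    }
)

# 1. Remove exact duplicate rows.
cleansed = results.drop_duplicates().copy()

# 2. Coerce scores to numeric; unparsable values such as "ninety" become NaN.
cleansed["score"] = pd.to_numeric(cleansed["score"], errors="coerce")

# 3. Drop rows that still have no usable score so downstream analysis
#    only sees numeric values.
cleansed = cleansed.dropna(subset=["score"])

print(cleansed)

Unlike the validation sketch earlier, this step actively changes the data: it removes duplicates and unusable rows so that the resulting dataset is ready for analysis.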