
Data Wrangling on AWS

By: Navnit Shukla, Sankar M, Sampat Palani

Overview of this book

Data wrangling is the process of cleaning, transforming, and organizing raw, messy, or unstructured data into a structured format. It involves processes such as data cleaning, data integration, data transformation, and data enrichment to ensure that the data is accurate, consistent, and suitable for analysis. Data Wrangling on AWS equips you with the knowledge to realize the full potential of AWS data wrangling tools. First, you’ll be introduced to data wrangling on AWS and familiarized with the data wrangling services available there. You’ll learn how to work with AWS Glue DataBrew, AWS Data Wrangler, and Amazon SageMaker. Next, you’ll discover other AWS services, such as Amazon S3, Redshift, Athena, and QuickSight. Additionally, you’ll explore advanced topics such as performing pandas data operations with AWS Data Wrangler, optimizing ML data with Amazon SageMaker, and building a data warehouse with Glue DataBrew, along with security and monitoring aspects. By the end of this book, you’ll be well equipped to perform data wrangling using AWS services.
Table of Contents (19 chapters)
Part 1: Unleashing Data Wrangling with AWS
Part 2: Data Wrangling with AWS Tools
Part 3: AWS Data Management and Analysis
Part 4: Advanced Data Manipulation and ML Data Optimization
Part 5: Ensuring Data Lake Security and Monitoring

Challenges and considerations when building a data lake on Amazon S3

When building a data lake on Amazon S3, or any data lake in general, here are some challenges and considerations to be aware of:

  • Data ingestion: Bringing data into a data lake can be challenging, particularly when it arrives from multiple sources with varying formats and structures, which makes it difficult to ensure data quality and consistency. Handling large and growing data volumes adds further complexity, as does propagating schema changes consistently to all downstream applications.
  • Data governance: Maintaining data quality, security, and regulatory compliance can be difficult when dealing with a large volume of data in a data lake. Implementing policies and standards for data classification, quality, and retention, as well as managing access and permissions, including role-based access control (RBAC) and data encryption, can...
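The schema-change concern raised in the data ingestion bullet can be sketched as a simple drift check run on each incoming batch before it lands in the lake. This is a minimal, hypothetical illustration using plain pandas, not an AWS API: the `EXPECTED_SCHEMA` table and the `detect_schema_drift` helper are names invented for this example, standing in for a schema registered in a catalog such as AWS Glue.

```python
import pandas as pd

# Hypothetical "registered" schema for a data lake table
# (column name -> expected pandas dtype string).
EXPECTED_SCHEMA = {"order_id": "int64", "amount": "float64", "region": "object"}

def detect_schema_drift(df: pd.DataFrame, expected: dict) -> dict:
    """Compare an incoming batch against the registered schema and
    report added, missing, and retyped columns."""
    actual = {col: str(dtype) for col, dtype in df.dtypes.items()}
    return {
        "added": sorted(set(actual) - set(expected)),
        "missing": sorted(set(expected) - set(actual)),
        "retyped": sorted(
            col for col in set(actual) & set(expected)
            if actual[col] != expected[col]
        ),
    }

# An incoming batch that dropped "region" and added a new "channel" column.
batch = pd.DataFrame(
    {"order_id": [1, 2], "amount": [9.5, 12.0], "channel": ["web", "app"]}
)
drift = detect_schema_drift(batch, EXPECTED_SCHEMA)
# drift -> {"added": ["channel"], "missing": ["region"], "retyped": []}
```

A check like this could gate ingestion: a non-empty report either blocks the load or triggers a catalog update so that downstream consumers see the change deliberately rather than by surprise.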