Data Wrangling on AWS

By: Navnit Shukla, Sankar M, Sampat Palani

Overview of this book

Data wrangling is the process of cleaning, transforming, and organizing raw, messy, or unstructured data into a structured format. It involves processes such as data cleaning, data integration, data transformation, and data enrichment to ensure that the data is accurate, consistent, and suitable for analysis. Data Wrangling on AWS equips you with the knowledge to realize the full potential of AWS data wrangling tools. First, you’ll be introduced to data wrangling on AWS and familiarized with the data wrangling services available in AWS. You’ll learn how to work with AWS Glue DataBrew, AWS Data Wrangler, and Amazon SageMaker. Next, you’ll discover other AWS services, such as Amazon S3, Redshift, Athena, and QuickSight. Additionally, you’ll explore advanced topics such as performing pandas data operations with AWS Data Wrangler, optimizing ML data with SageMaker, and building a data warehouse with Glue DataBrew, along with security and monitoring aspects. By the end of this book, you’ll be well-equipped to perform data wrangling using AWS services.
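To give a flavor of the workflow the book covers, here is a minimal sketch using the awswrangler library (AWS Data Wrangler) to clean a dataset and land it in S3 for Athena. The bucket path, Glue database, and table names are hypothetical placeholders, not examples from the book.

```python
# A minimal data wrangling sketch with awswrangler (AWS Data Wrangler).
# Assumes AWS credentials are configured; all resource names are hypothetical.
import awswrangler as wr

# Read raw CSV data from S3 into a pandas DataFrame
df = wr.s3.read_csv("s3://example-bucket/raw/orders.csv")  # hypothetical path

# Basic cleaning with ordinary pandas operations
df = df.dropna(subset=["order_id"]).drop_duplicates()

# Write the cleaned data back to S3 as Parquet and register it in the
# Glue Data Catalog so that Athena can query it directly
wr.s3.to_parquet(
    df=df,
    path="s3://example-bucket/clean/orders/",  # hypothetical path
    dataset=True,
    database="example_db",   # hypothetical Glue database
    table="orders_clean",    # hypothetical table name
)
```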
Table of Contents (19 chapters)

Part 1: Unleashing Data Wrangling with AWS
Part 2: Data Wrangling with AWS Tools
Part 3: AWS Data Management and Analysis
Part 4: Advanced Data Manipulation and ML Data Optimization
Part 5: Ensuring Data Lake Security and Monitoring

What is big data?

Big data refers to extremely large datasets that are too complex and diverse to be processed and analyzed using traditional data management and analytics tools. Big data often comes from multiple sources, such as sensors, social media, and e-commerce platforms, and it may include structured, semi-structured, and unstructured data.

The volume, velocity, and variety of big data present significant challenges for data management and analysis. Traditional data storage and processing systems are not designed to handle such large and complex datasets, and they may not be able to provide the performance, scalability, and flexibility required for big data applications.

To overcome these challenges, organizations have turned to big data technologies, such as Apache Hadoop, Apache Spark, and Apache Flink. These technologies are designed to support the storage, processing, and analysis of big data at scale, and they provide a distributed and parallel architecture that...
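As a concrete illustration of the distributed, parallel model these frameworks share, here is a minimal PySpark sketch. Spark splits the dataset into partitions across executors and computes partial aggregates in parallel before combining them; the S3 path and column names are hypothetical placeholders.

```python
# A minimal sketch of distributed processing with Apache Spark (PySpark).
# Assumes a working Spark installation; the path and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("big-data-sketch").getOrCreate()

# Spark reads the data as partitions spread across executors, so the
# aggregation below runs in parallel on each partition before the
# partial results are merged into the final counts.
df = spark.read.json("s3://example-bucket/clickstream/")  # hypothetical path
daily_counts = (
    df.groupBy(F.to_date("event_time").alias("day"))  # hypothetical column
      .count()
      .orderBy("day")
)
daily_counts.show()
```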