Data Wrangling on AWS

By: Navnit Shukla, Sankar M, Sampat Palani

Overview of this book

Data wrangling is the process of cleaning, transforming, and organizing raw, messy, or unstructured data into a structured format. It involves processes such as data cleaning, data integration, data transformation, and data enrichment to ensure that the data is accurate, consistent, and suitable for analysis. Data Wrangling on AWS equips you with the knowledge to realize the full potential of AWS data wrangling tools. First, you’ll be introduced to data wrangling on AWS and familiarized with the data wrangling services available in AWS. You’ll understand how to work with AWS Glue DataBrew, AWS Data Wrangler, and Amazon SageMaker. Next, you’ll discover other AWS services, such as Amazon S3, Redshift, Athena, and QuickSight. Additionally, you’ll explore advanced topics, such as performing pandas data operations with AWS Data Wrangler, optimizing ML data with Amazon SageMaker, and building a data warehouse with Glue DataBrew, along with security and monitoring aspects. By the end of this book, you’ll be well equipped to perform data wrangling using AWS services.
Table of Contents (19 chapters)

Part 1: Unleashing Data Wrangling with AWS
Part 2: Data Wrangling with AWS Tools
Part 3: AWS Data Management and Analysis
Part 4: Advanced Data Manipulation and ML Data Optimization
Part 5: Ensuring Data Lake Security and Monitoring

What is Apache Spark?

Apache Spark is a unified analytics engine for processing big data, created in 2009 at the University of California, Berkeley’s AMPLab. It began as a class project aimed at a limitation of the Hadoop framework for machine learning use cases: between iterations, intermediate data had to be exchanged through HDFS. The objective was to design a new framework for fast, interactive processing, including machine learning and interactive data analysis, while retaining the implicit data parallelism and fault tolerance of MapReduce and HDFS from the Hadoop framework. Spark incorporates in-memory caching and is optimized for analytics workloads of any size.
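The caching difference described above can be illustrated with a toy sketch in plain Python (not Spark itself, and the function names here are hypothetical stand-ins): a MapReduce-style job re-reads its input from storage on every iteration, while a Spark-style job reads it once, caches it in memory, and iterates on the cached copy.

```python
# Toy illustration (plain Python, NOT Spark) of why in-memory caching
# helps iterative workloads: count how often "storage" is touched.

disk_reads = 0

def read_from_storage():
    """Simulate an expensive read from HDFS/S3 (hypothetical helper)."""
    global disk_reads
    disk_reads += 1
    return list(range(1_000))

def train_step(data, weight):
    """Stand-in for one iteration of an ML algorithm."""
    return weight + sum(data) % 7

# MapReduce-style: re-read the dataset on every iteration.
weight = 0
for _ in range(5):
    data = read_from_storage()
    weight = train_step(data, weight)
reads_without_cache = disk_reads  # 5 storage reads for 5 iterations

# Spark-style: read once, cache in memory, iterate on the cached copy
# (analogous to calling .cache() or .persist() on a Spark DataFrame/RDD).
disk_reads = 0
cached = read_from_storage()
weight = 0
for _ in range(5):
    weight = train_step(cached, weight)
reads_with_cache = disk_reads     # 1 storage read for 5 iterations

print(reads_without_cache, reads_with_cache)  # → 5 1
```

In real Spark code the equivalent step is marking a dataset with `cache()` (or `persist()`) before an iterative algorithm, so subsequent actions reuse the in-memory copy instead of recomputing or re-reading it.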

Apache Spark was open sourced in 2010 under a BSD License, and in 2013, the project was contributed to the Apache Software Foundation. In 2014, Spark became a top-level Apache project. It has garnered over 1,700 contributors and over 30,000 stars on GitHub.

According to...