Practical Data Wrangling

By: Allan Visochek

Overview of this book

Around 80% of the time spent on data analysis goes into cleaning and preparing data. This is an important task, and a prerequisite to the rest of the data analysis workflow, including visualization, analysis, and reporting. Python and R are popular tools for data analysis, with packages well suited to manipulating many different kinds of data. This book will show you a range of data wrangling techniques and how to leverage the power of Python and R packages to implement them. You'll start by understanding the data wrangling process and building a solid foundation for working with different types of data. You'll work with different data structures and acquire and parse data from various locations. You'll also see how to reshape the layout of data and how to manipulate, summarize, and join datasets. Finally, the book concludes with a quick primer on accessing and processing data from databases, conducting data exploration, and storing and retrieving data quickly using databases. Each of these topics is illustrated with practical examples using simple and real-world datasets to aid understanding. By the end of the book, you'll have a thorough understanding of data wrangling concepts and how to implement them effectively.
Table of Contents (16 chapters)
Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface

Logistical overview 


This chapter will include two demonstrations. The first of these will show you how to import data into the MongoDB database and how to update the data. This will not require any code files, but it will require some setup, which is detailed in the following subsections.

The second demonstration will show you how to interface with MongoDB from within Python, using a Python script called process_large_data.py. The finished code is available in the code folder of the external resources.

All of the external resources are available at the following link: https://goo.gl/8S58ra.

System requirements

To follow along with the exercises, you should have at least 25 GB of disk space free. If disk space is a limiting factor, you can still follow along using a smaller version of the dataset, as I will explain in the next section. 

Data

To demonstrate working with large datasets, I've created an artificial dataset, fake_weather_data.csv, containing fake weather data since 1980. The dataset...