Practical Data Wrangling

By: Allan Visochek

Using Python to retrieve data from APIs


The first step is to import the requests module. I will also import the json module, which can be used to write the data retrieved from the API to a JSON file. At the beginning of get_recent_issues.py, the following code imports the requests and json modules:

import requests
import json

The next step is to create a string called base_url. The base URL is the beginning part of the URL and can be followed by additional query parameters. For now, the base URL is all you need, but in the next section you will build on it in order to get data iteratively. In the following continuation of get_recent_issues.py, a string is created containing the base URL for the issues resource of the SeeClickFix API:

import requests
import json

## build a base url which is the endpoint to the api
## for this script, this is all you need
base_url = "https://seeclickfix.com/api/v2/issues?"
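
To illustrate what "followed by additional query parameters" means, here is a minimal sketch of how a query string could be appended to base_url. The parameter names page and per_page are assumptions used purely for illustration; the actual parameters are introduced in the next section.

## hypothetical illustration only: page and per_page are assumed
## parameter names, not taken from this script
params = "page=1&per_page=100"
full_url = base_url + params
print(full_url)
## https://seeclickfix.com/api/v2/issues?page=1&per_page=100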

The next step is to replicate the action of the browser. In other words, you will need to submit...
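
As a rough preview of that step, the following sketch shows one way the request could be submitted with requests.get() and the decoded JSON written out with json.dump(). The output filename issues.json is an assumption, and error handling is omitted.

## submit a GET request to the base URL; the API replies with JSON
response = requests.get(base_url)

## decode the JSON body of the response into a Python dictionary
data = response.json()

## write the retrieved data to a local file (filename is an assumption)
with open("issues.json", "w") as outfile:
    json.dump(data, outfile, indent=2)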