Practical Data Analysis Cookbook

By: Drabas

Overview of this book

Data analysis is the process of systematically applying statistical and logical techniques to describe and illustrate, condense and recap, and evaluate data. Its importance has been most visible in the information and communication technologies sector, but today it is an asset for employees in almost every sector of the economy. This book provides a rich set of independent recipes that dive into the world of data analytics and modeling using a variety of approaches, tools, and algorithms. You will learn the basics of data handling and modeling, and will build your skills gradually toward more advanced topics such as simulations, raw text processing, social interaction analysis, and more.

First, you will learn practical, easy-to-follow techniques for reading, writing, cleaning, reformatting, exploring, and understanding your data: arguably the most time-consuming (and the most important) tasks for any data scientist. In the second section, independent recipes delve into intermediate topics such as classification, clustering, and prediction. With the help of these easy-to-follow recipes, you will also learn techniques that can easily be extended to solve other real-life problems, such as building recommendation engines or predictive models.

In the third section, you will explore more advanced topics, from graph theory through natural language processing and discrete choice modeling to simulations. You will also learn to identify the origin of fraud with the help of a graph, scrape websites, and classify movies based on their reviews. By the end of this book, you will be able to efficiently use the vast array of tools that the Python environment has to offer.
Table of Contents (13 chapters)

Imputing missing observations

Collecting data is messy. Research data collection instruments fail, humans do not want to answer some questions in a questionnaire, or files might get corrupted; these are but a sample of reasons why a dataset might have missing observations. If we want to use the dataset, we have a couple of choices: remove the missing observations altogether or replace them with some value.
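The two choices can be sketched on a toy DataFrame. The column names below mirror the recipe's zip and price columns, but the values are made up for illustration:

```python
import numpy as np
import pandas as pd

# A tiny made-up dataset with one missing price
df = pd.DataFrame({'zip': ['10001', '10001', '10002'],
                   'price': [300000.0, np.nan, 450000.0]})

# Choice 1: remove the missing observations altogether
dropped = df.dropna(subset=['price'])
print(len(dropped))  # 2 rows remain

# Choice 2: replace them with some value (here, the overall mean price)
imputed = df['price'].fillna(df['price'].mean())
print(imputed.isna().sum())  # 0 missing values remain
```

Dropping rows shrinks the dataset; imputing keeps every row but introduces estimated values. The rest of this recipe takes the second route.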

Getting ready

To execute this recipe, you will need the pandas module.

No other prerequisites are required.

How to do it…

Once again, we assume that the reader followed the earlier recipes and the csv_read DataFrame is already accessible to us. To impute missing observations, all you need to do is add this snippet to your code (the data_imput.py file):

# impute mean in place of NaNs
csv_read['price_mean'] = csv_read['price'] \
    .fillna(
        csv_read.groupby('zip')['price'].transform('mean')
    )

How it works…

The pandas .fillna(...) method does all the heavy lifting for us. It is a DataFrame method that takes the value to be imputed as its only required parameter.
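As a minimal illustration on a made-up Series (not the recipe's data), passing a scalar fills every hole with that value:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
# The value to impute is the only required argument
print(s.fillna(0.0).tolist())  # [1.0, 0.0, 3.0]
```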

Tip

Consult the pandas documentation of .fillna(...) to see other parameters that can be passed to the method. The documentation can be found at http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html.

In our approach, we assumed that each ZIP code might have different price averages. This is why we first grouped the observations using the .groupby(...) method. It stands to reason that the prices of houses would also heavily depend on the number of rooms in a given house; if our dataset had more observations, we would have added the beds variable as well.

The .groupby(...) method returns a GroupBy object. The .transform(...) method of the GroupBy object returns a Series aligned with the original DataFrame, in which every observation is replaced with a value computed for its group; in our case, the mean price for its ZIP code.
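The alignment is easiest to see on a small made-up example: .transform('mean') returns a Series indexed like the original DataFrame, where every row carries its own group's mean:

```python
import numpy as np
import pandas as pd

# Made-up prices in two ZIP code groups; one observation is missing
df = pd.DataFrame({'zip': ['10001', '10001', '10002', '10002'],
                   'price': [200.0, 400.0, np.nan, 100.0]})

# Every row receives the mean of its ZIP code group
# (the NaN is skipped when the group mean is computed)
group_means = df.groupby('zip')['price'].transform('mean')
print(group_means.tolist())  # [300.0, 300.0, 100.0, 100.0]
```

Because the result has the same index as the original column, it can be passed directly to .fillna(...).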

The .fillna(...) method then simply replaces the missing observations with the mean of their ZIP code.

There's more…

Imputing the mean is not the only way to fill in the blanks. The mean is a reasonable value only if the distribution of prices is symmetrical and free of outliers; if the distribution is skewed, the mean is biased. A more robust measure of central tendency is the median. Switching to it takes one simple change to the snippet presented earlier:

# impute median in place of NaNs
csv_read['price_median'] = csv_read['price'] \
    .fillna(
        csv_read.groupby('zip')['price'].transform('median')
    )
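A small made-up series with one outlier shows why the median is the safer choice for skewed data:

```python
import numpy as np
import pandas as pd

# Hypothetical skewed prices: one outlier dominates the mean
prices = pd.Series([100.0, 110.0, 120.0, 130.0, 5000.0, np.nan])

print(prices.fillna(prices.mean()).iloc[-1])    # 1092.0, pulled up by the outlier
print(prices.fillna(prices.median()).iloc[-1])  # 120.0, robust to the outlier
```

The mean-imputed value is far above every typical observation, while the median lands in the middle of the bulk of the data.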