
Python Data Analysis Cookbook

By: Ivan Idris

Overview of this book

Data analysis is a rapidly evolving field, and Python is a multi-paradigm programming language suitable for object-oriented application development and functional design patterns. Because Python offers a range of tools and libraries for all purposes, it has gradually evolved into a primary language for data science, covering data analysis, visualization, and machine learning. Python Data Analysis Cookbook focuses on reproducibility and creating production-ready systems. You will start with recipes that set the foundation for data analysis with libraries such as matplotlib, NumPy, and pandas. You will learn to create visualizations by choosing color maps and palettes, then dive into statistical data analysis using distribution algorithms and correlations. The book will then help you find your way around different data and numerical problems, get to grips with Spark and HDFS, and set up migration scripts for web mining. You will then dive deeper into recipes on spectral analysis, smoothing, and bootstrapping methods. Moving on, you will learn to rank stocks and check market efficiency, then work with metrics and clusters. You will achieve parallelism to improve system performance by using multiple threads and speeding up your code. By the end of the book, you will be capable of handling various data analysis techniques in Python and devising solutions for problem scenarios.
Table of Contents (23 chapters)
Python Data Analysis Cookbook
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
Glossary
Index

Preface

 

"Data analysis is Python's killer app"

 
 --Unknown

This book is the follow-up to Python Data Analysis. The obvious question is, "what does this new book add?" as Python Data Analysis is pretty great (or so I like to believe) already. This book, Python Data Analysis Cookbook, is targeted at slightly more experienced Pythonistas. A year has passed, so we are using newer versions of software and software libraries that I didn't cover in Python Data Analysis. Also, I've had time to rethink and research, and as a result I decided the following:

  • I need a toolbox to make my life easier and increase reproducibility. I called the toolbox dautil and made it available via PyPI (it can be installed with pip or easy_install).

  • My soul-searching exercise led me to believe that I need to make it easier to obtain and install the required software. I published a Docker container (pydacbk) with some of the software we need via DockerHub. You can read more about the setup in Chapter 1, Laying the Foundation for Reproducible Data Analysis, and in the online chapter. The Docker container is not ideal because it grew quite large, so I had to make some tough decisions. Since the container is not really part of the book, I think it is appropriate for you to contact me directly if you have any issues with it. However, please keep in mind that I can't change the image drastically.

  • This book uses the IPython Notebook, which has become a standard tool for analysis. I have given some related tips in the online chapter and other books I have written.

  • I am using Python 3 with very few exceptions because Python 2 will not be maintained after 2020.

Why do you need this book?

Some people will tell you that you don't need books: just get yourself an interesting project and figure out the rest as you go along. Although there are plenty of resources out there, this can be a very frustrating road. If you want to make a delicious soup, for example, you can of course ask friends and family, search the Internet, or watch cooking shows. However, your friends and family are not available full time, and the quality of Internet content varies. And in my humble opinion, Packt Publishing, the reviewers, and I have spent so much time and energy on this book that I will be surprised if you don't get any value out of it.

Data analysis, data science, big data – what is the big deal?

You probably have seen Venn diagrams depicting data science as the intersection of mathematics/statistics, computer science, and domain expertise. Data analysis is timeless and was there before data science and even before computer science. You could do data analysis with a pen and paper and, in more modern times, with a pocket calculator.

Data analysis has many aspects, with goals such as making decisions or coming up with new hypotheses and questions. The hype, status, and financial rewards surrounding data science and big data remind me of the time when data warehousing and business intelligence were the buzzwords. The ultimate goal of business intelligence and data warehousing was to build dashboards for management. This involved a lot of politics and organizational aspects, but on the technical side, it was mostly about databases. Data science, on the other hand, is not database-centric and leans heavily on machine learning. Machine learning techniques have become necessary because of the bigger volumes of data. The data growth is caused by the growth of the world population and the rise of new technologies, such as social media and mobile devices. The data growth is, in fact, probably the only trend that we can be sure of continuing. The difference between constructing dashboards and applying machine learning is analogous to the way search engines evolved.

Search engines (if you can call them that) were initially nothing more than well-organized collections of links created manually. Eventually, the automated approach won. Since, in time, more data will be created (and not destroyed), we can expect an increase in automated data analysis.

A brief history of data analysis with Python

The history of the various Python software libraries is quite interesting. I am not a historian, so the following notes are written from my own perspective:

  • 1989: Guido van Rossum implements the very first version of Python at the CWI in the Netherlands as a Christmas "hobby" project.

  • 1995: Jim Hugunin creates Numeric—the predecessor to NumPy.

  • 1999: Pearu Peterson writes f2py as a bridge between Fortran and Python.

  • 2000: Python 2.0 is released.

  • 2001: The SciPy library is released. Also, Numarray, a library competing with Numeric, is created. Fernando Perez releases IPython, which starts out as an "afternoon hack". NLTK is released as a research project.

  • 2002: John Hunter creates the Matplotlib library.

  • 2005: Travis Oliphant releases NumPy. Initially, NumPy is Numeric extended with features inspired by Numarray.

  • 2006: NumPy 1.0 is released. The first version of SQLAlchemy is released.

  • 2007: The scikit-learn project is initiated as a Google Summer of Code project by David Cournapeau. Also, Cython is forked from Pyrex. Cython is later used intensively in pandas and scikit-learn to improve performance.

  • 2008: Wes McKinney starts working on pandas. Python 3.0 is released.

  • 2011: The IPython 0.12 release introduces the IPython notebook. Packt Publishing releases NumPy 1.5 Beginner's Guide.

  • 2012: Packt Publishing releases NumPy Cookbook.

  • 2013: Packt Publishing releases NumPy Beginner's Guide, Second Edition.

  • 2014: Fernando Perez announces Project Jupyter, which aims to make a language-agnostic notebook. Packt Publishing releases Learning NumPy Array and Python Data Analysis.

  • 2015: Packt Publishing releases NumPy Beginner's Guide, Third Edition and NumPy Cookbook, Second Edition.

A conjecture about the future

The future is a bright place, where an incredible amount of data lives in the Cloud and software runs on any imaginable device with an intuitive customizable interface. (I know young people who can't stop talking about how awesome their phone is and how one day we will all be programming on tablets by dragging and dropping). It seems there is a certain angst in the Python community about not being relevant in the future. Of course, the more you have invested in Python, the more it matters.

To figure out what to do, we need to know what makes Python special. A school of thought claims that Python is a glue language gluing C, Fortran, R, Java, and other languages; therefore, we just need better glue. This probably also means "borrowing" features from other languages. Personally, I like the way Python works, its flexible nature, its data structures, and the fact that it has so many libraries and features. I think the future is in more delicious syntactic sugar and just-in-time compilers. Somehow we should be able to continue writing Python code, which is automatically converted for us into concurrent (machine) code. Unseen machinery under the hood manages lower-level details and sends data and instructions to CPUs, GPUs, or the Cloud. The code should be able to easily communicate with whatever storage backend we are using. Ideally, all of this magic will be just as convenient as automatic garbage collection. It may sound like an impossible "click of a button" dream, but I think it is worth pursuing.

What this book covers

Chapter 1, Laying the Foundation for Reproducible Data Analysis, is a pretty important chapter, and I recommend that you do not skip it. It explains Anaconda, Docker, unit testing, logging, and other essential elements of reproducible data analysis.

Chapter 2, Creating Attractive Data Visualizations, demonstrates how to visualize data and mentions frequently encountered pitfalls.

Chapter 3, Statistical Data Analysis and Probability, discusses statistical probability distributions and correlation between two variables.

Chapter 4, Dealing with Data and Numerical Issues, is about outliers and other common data issues. Data is almost never perfect, so a large portion of the analysis effort goes into dealing with data imperfections.

Chapter 5, Web Mining, Databases, and Big Data, is light on mathematics, but more focused on technical topics, such as databases, web scraping, and big data.

Chapter 6, Signal Processing and Timeseries, is about time series data, which is abundant and requires special techniques. Usually, we are interested in trends and seasonality or periodicity.

Chapter 7, Selecting Stocks with Financial Data Analysis, focuses on stock investing because stock price data is abundant. This is the only chapter on finance and the content should be at least partially relevant if stocks don't interest you.

Chapter 8, Text Mining and Social Network Analysis, helps you cope with the floods of textual and social media information.

Chapter 9, Ensemble Learning and Dimensionality Reduction, covers ensemble learning, classification and regression algorithms, as well as hierarchical clustering.

Chapter 10, Evaluating Classifiers, Regressors, and Clusters, evaluates the classifiers and regressors from the preceding chapter, Chapter 9, Ensemble Learning and Dimensionality Reduction.

Chapter 11, Analyzing Images, uses the OpenCV library quite a lot to analyze images.

Chapter 12, Parallelism and Performance, is about software performance. I discuss various options to improve performance, including caching and just-in-time compilers.

Appendix A, Glossary, is a brief glossary of technical concepts used throughout the book. The goal is to have a reference that is easy to look up.

Appendix B, Function Reference, is a short reference of functions meant as an extra aid in case you are temporarily unable to look up documentation.

Appendix C, Online Resources, lists resources including presentations, links to documentation, and freely available IPython notebooks and data. This appendix is available as an online chapter.

Appendix D, Tips and Tricks for Command-Line and Miscellaneous Tools, gives a short list of tips for the various tools we use in this book, such as the IPython notebook, Docker, and Unix shell commands. The list is not meant to be exhaustive. This appendix is also available as an online chapter.

What you need for this book

First, you need a Python 3 distribution. I recommend the full Anaconda distribution as it comes with the majority of the software we need. I tested the code with Python 3.4 and the following packages:

  • joblib 0.8.4

  • IPython 3.2.1

  • NetworkX 1.9.1

  • NLTK 3.0.2

  • Numexpr 2.3.1

  • pandas 0.16.2

  • SciPy 0.16.0

  • seaborn 0.6.0

  • sqlalchemy 0.9.9

  • statsmodels 0.6.1

  • matplotlib 1.5.0

  • NumPy 1.10.1

  • scikit-learn 0.17

  • dautil 0.0.1a29

For some recipes, you need to install extra software, but this is explained whenever the software is required.
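If you want to verify your setup, a minimal sketch such as the following reports which of the packages above are importable and which versions you have installed. The import names used here are the conventional ones (for example, scikit-learn imports as sklearn); the versions are the ones the book's code was tested with, and newer versions may work as well.

```python
# Report the installed version of each required package, or flag it as missing.
import importlib

TESTED_VERSIONS = {
    'joblib': '0.8.4',
    'IPython': '3.2.1',
    'networkx': '1.9.1',
    'nltk': '3.0.2',
    'numexpr': '2.3.1',
    'pandas': '0.16.2',
    'scipy': '0.16.0',
    'seaborn': '0.6.0',
    'sqlalchemy': '0.9.9',
    'statsmodels': '0.6.1',
    'matplotlib': '1.5.0',
    'numpy': '1.10.1',
    'sklearn': '0.17',
    'dautil': '0.0.1a29',
}

def check_environment(packages):
    """Return {name: installed version, 'unknown', or None if not importable}."""
    report = {}
    for name in packages:
        try:
            module = importlib.import_module(name)
            report[name] = getattr(module, '__version__', 'unknown')
        except ImportError:
            report[name] = None
    return report

if __name__ == '__main__':
    for name, installed in sorted(check_environment(TESTED_VERSIONS).items()):
        status = installed if installed else 'NOT INSTALLED'
        print('{0:<12} tested with {1:<8} found {2}'.format(
            name, TESTED_VERSIONS[name], status))
```

A mismatch between the tested and installed versions is not necessarily a problem, but it is the first thing to check if a recipe misbehaves.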

Who this book is for

This book is hands-on and low on theory. You should have better-than-beginner Python knowledge and some knowledge of linear algebra, calculus, machine learning, and statistics. Ideally, you will have read Python Data Analysis, but this is not a requirement. I also recommend the following books:

  • Building Machine Learning Systems with Python by Willi Richert and Luis Pedro Coelho, 2013

  • Learning NumPy Array by Ivan Idris, 2014

  • Learning scikit-learn: Machine Learning in Python by Guillermo Moncecchi, 2013

  • Learning SciPy for Numerical and Scientific Computing by Francisco J. Blanco-Silva, 2013

  • Matplotlib for Python Developers by Sandro Tosi, 2009

  • NumPy Beginner's Guide - Third Edition by Ivan Idris, 2015

  • NumPy Cookbook – Second Edition by Ivan Idris, 2015

  • Parallel Programming with Python by Jan Palach, 2014

  • Python Data Visualization Cookbook by Igor Milovanović, 2013

  • Python for Finance by Yuxing Yan, 2014

  • Python Text Processing with NLTK 2.0 Cookbook by Jacob Perkins, 2010

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it, How it works, There's more, and See also).

To give clear instructions on how to complete a recipe, we use these sections as follows:

Getting ready

This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.

How to do it…

This section contains the steps required to follow the recipe.

How it works…

This section usually consists of a detailed explanation of what happened in the previous section.

There's more…

This section provides additional information in order to make you more knowledgeable about the recipe.

See also

This section provides helpful links to other useful information for the recipe.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Plot the data and corresponding linear fits with the Seaborn lmplot() function."

A block of code is set as follows:

population = dawb.download(indicator=[dawb.get_name('pop_grow'),
                                      dawb.get_name('gdp_pcap'),
                                      dawb.get_name('primary_education')],
                           country=countries['iso2c'],
                           start=2014, end=2014)

population = dawb.rename_columns(population)

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

plt.figure()
plt.title('Rainy Weather vs Wind Speed')
categorical = df
categorical['RAIN'] = categorical['RAIN'] > 0
ax = sns.violinplot(x="RAIN", y="WIND_SPEED",
                    data=categorical)

Any command-line input or output is written as follows:

$ conda install -c scitools cartopy

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "In the next screenshot, the Day of year 31 text comes from the tooltip:"

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply send us an e-mail mentioning the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

  1. Log in or register to our website using your e-mail address and password.

  2. Hover the mouse pointer on the SUPPORT tab at the top.

  3. Click on Code Downloads & Errata.

  4. Enter the name of the book in the Search box.

  5. Select the book for which you're looking to download the code files.

  6. Choose from the drop-down menu where you purchased this book from.

  7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR / 7-Zip for Windows

  • Zipeg / iZip / UnRarX for Mac

  • 7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/PythonDataAnalysisCookbook. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us, and we will do our best to address the problem.