Practical Data Science with Python

By: Nathan George

Overview of this book

Practical Data Science with Python teaches you core data science concepts through real-world and realistic examples, strengthening your grasp of both basic and advanced principles of data preparation and storage, statistics, probability theory, machine learning, and Python programming, and helping you build a solid foundation for proficiency in data science.

The book starts with an overview of basic Python skills and then introduces foundational data science techniques, followed by a thorough explanation of the Python code needed to execute them. You'll understand the code by working through the examples; the code has been broken down into small chunks (a few lines or a function at a time) to enable thorough discussion. As you progress, you will learn how to perform data analysis while exploring the functionality of key data science Python packages, including pandas, SciPy, and scikit-learn.

Finally, the book covers ethics and privacy concerns in data science and suggests resources for improving data science skills, as well as ways to stay up to date on new developments. By the end of the book, you should be able to comfortably use Python for basic data science projects and have the skills to execute the data science process on any data source.
Table of Contents (30 chapters)

1. Part I - An Introduction and the Basics
4. Part II - Dealing with Data
10. Part III - Statistics for Data Science
13. Part IV - Machine Learning
21. Part V - Text Analysis and Reporting
24. Part VI - Wrapping Up
28. Other Books You May Enjoy
29. Index

Feature importance from tree-based methods

Feature importance, also called variable importance, can be calculated from tree-based methods by summing, for each variable, the reduction in Gini impurity or entropy from that variable's splits across all the trees.

So, if a particular variable is used to split the data and reduces the Gini impurity or entropy value by a large amount, that feature is important for making predictions. This is a nice contrast to using coefficient-based feature importance from logistic or linear regression, because coefficients only capture linear relationships, while tree-based feature importances can capture non-linear ones. There are other ways of calculating feature importance as well, such as permutation feature importance and SHAP (SHapley Additive exPlanations).
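As a quick illustration of these ideas (a sketch using scikit-learn rather than the H2O workflow shown below, and with the breast cancer dataset chosen here only as a convenient built-in example), impurity-based importances come from a fitted forest's `feature_importances_` attribute, while `permutation_importance` measures the score drop when each column is shuffled:

```python
# Impurity-based (Gini) vs. permutation feature importance with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# feature_importances_ sums the impurity (Gini) reduction from each feature's
# splits, averaged over all trees and normalized to sum to 1.
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
gini_imp = dict(zip(X.columns, rf.feature_importances_))

# Permutation importance: shuffle one column at a time and measure how much
# the model's score drops; larger drops mean more important features.
perm = permutation_importance(rf, X, y, n_repeats=5, random_state=42)
perm_imp = dict(zip(X.columns, perm.importances_mean))

# The three features with the largest Gini-based importance.
top_gini = sorted(gini_imp, key=gini_imp.get, reverse=True)[:3]
print(top_gini)
```

Note that permutation importance is computed against a scoring metric on held-out or training data, so it is less biased toward high-cardinality features than impurity-based importance, at the cost of extra computation.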

Using H2O for feature importance

We can easily get the importances with drf.varimp(), or plot them with drf.varimp_plot(server=True). The server=True argument tells H2O to render the plot with matplotlib, which allows us to do things such as saving the figure directly with plt.savefig(). The result looks like this:

...