Python Machine Learning Blueprints - Second Edition

By: Alexander Combs, Michael Roman

Overview of this book

Machine learning is transforming the way we understand and interact with the world around us. This book is the perfect guide for you to put your knowledge and skills into practice and use the Python ecosystem to cover key domains in machine learning. This second edition covers a range of libraries from the Python ecosystem, including TensorFlow and Keras, to help you implement real-world machine learning projects. The book begins by giving you an overview of machine learning with Python. With the help of complex datasets and optimized techniques, you’ll go on to understand how to apply advanced concepts and popular machine learning algorithms to real-world projects. Next, you’ll cover projects from domains such as predictive analytics to analyze the stock market and recommendation systems for GitHub repositories. You’ll also work on projects from the NLP domain to create a custom news feed using frameworks such as scikit-learn, TensorFlow, and Keras. Following this, you’ll learn how to build an advanced chatbot, and scale things up using PySpark. In the concluding chapters, you can look forward to exciting insights into deep learning, and you’ll even create an application using computer vision and neural networks. By the end of this book, you’ll be able to analyze data seamlessly and make a powerful impact through your projects.

Generating the importance of a feature from our model

One of the nice features of logistic regression is that it provides predictor coefficients that tell us the relative importance of the predictor variables, or features. For a categorical feature, a positive coefficient tells us that, when present, the feature increases the probability of a positive outcome relative to the baseline. For a continuous feature, a positive coefficient tells us that an increase in the feature's value corresponds to an increase in the probability of a positive outcome. The size of the coefficient tells us the magnitude of that effect; strictly speaking, it acts on the log-odds of a positive outcome, which maps monotonically to the probability.
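To make the magnitude concrete, exponentiating a coefficient gives the multiplicative change in the odds of a positive outcome. The following is a minimal sketch using a hypothetical coefficient value of 0.7, not one taken from our fitted model:

import numpy as np

coef = 0.7  # hypothetical coefficient, for illustration only
# exp(coef) is the factor by which the odds of a positive outcome change
# for a one-unit increase in the feature (or when the category is present)
odds_ratio = np.exp(coef)
print(odds_ratio)  # roughly 2.01, so the odds approximately double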

Let's generate the feature importances from our model, and then examine the impact they have:

import pandas as pd

# clf.coef_ has shape (1, n_features) for a binary problem, so flatten it
fv = pd.DataFrame({'Feature': X_train.columns, 'Coef': clf.coef_.ravel()})
# Rank features from the most positive to the most negative coefficient
fv = fv.sort_values('Coef', ascending=False).reset_index(drop=True)
fv
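If you prefer a visual summary, the ranked coefficients can also be plotted. This is a minimal sketch, assuming matplotlib is installed and that fv holds the sorted DataFrame from the previous snippet:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 10))
ax.barh(fv['Feature'], fv['Coef'])  # one bar per feature
ax.invert_yaxis()                   # largest coefficient at the top
ax.set_xlabel('Coefficient')
plt.tight_layout()
plt.show()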