Supervised Machine Learning with Python

By: Taylor Smith

Overview of this book

Supervised machine learning is used in a wide range of sectors, such as finance, online advertising, and analytics, to train systems that make pricing predictions, campaign adjustments, customer recommendations, and much more by learning from training data and making decisions on their own. This makes it crucial to know how a machine 'learns' under the hood. This book will guide you through the implementation and nuances of many popular supervised machine learning algorithms, and help you understand how they work. You'll embark on this journey with a quick overview of supervised learning and see how it differs from unsupervised learning. You'll then explore parametric models, such as linear and logistic regression; non-parametric methods, such as decision trees; and a variety of clustering techniques that facilitate decision-making and predictions. As you advance, you'll work hands-on with recommender systems, which are widely used by online companies to increase user interaction and enrich shopping potential. Finally, you'll wrap up with a brief foray into neural networks and transfer learning. By the end of this book, you'll be equipped with hands-on techniques and will have gained the practical know-how you need to quickly and effectively apply algorithms to solve new problems.

Neural networks and deep learning


This is a huge topic in machine learning, so we can't cover everything in this chapter. If you've never seen a neural network before, it looks like a giant spider web. The vertices of this web are called neurons, or units, and they are based on an old-school linear classifier known as the perceptron. The idea is that an input vector comes in, a dot product is computed with a corresponding weight vector of parameters, and a bias value is added to the result. That result is then transformed via an activation function. A perceptron with a sigmoid activation is canonically equivalent to logistic regression.
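To make that concrete, here is a minimal sketch of a single perceptron's forward pass in plain NumPy. The input, weights, and bias below are hypothetical placeholders chosen for illustration, not values from the book:

import numpy as np

def sigmoid(z):
    # Squashes any real number into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

def perceptron_forward(x, w, b):
    # Dot product of the input with the weight vector, plus a bias,
    # then transformed by the sigmoid activation function
    z = np.dot(w, x) + b
    return sigmoid(z)

# Hypothetical 3-dimensional input with made-up parameters
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.7])
b = 0.25
print(perceptron_forward(x, w, b))  # a value in (0, 1), just like logistic regression

With the sigmoid in place, the output can be read as a probability, which is exactly what makes this setup interchangeable with logistic regression.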


When you string a whole bunch of these together, what you get is a massive web of perceptrons feeding perceptrons: this is called a multilayer perceptron, also known as a neural network. As each layer of perceptrons feeds the next, the neurons end up learning a series of nonlinear transformations of the input space,...
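As a rough illustration of that stacking, the sketch below chains two perceptron-style layers, so the output of one layer becomes the input of the next. The layer sizes and random parameters are assumptions for demonstration only, not the book's code:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(x, W, b):
    # One layer: every unit computes a weighted sum plus bias,
    # then applies the nonlinear activation
    return sigmoid(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                 # hypothetical 4-dimensional input
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)   # hidden layer: 4 inputs -> 5 units
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)   # output layer: 5 inputs -> 1 unit

hidden = layer_forward(x, W1, b1)     # first nonlinear transformation of the input space
output = layer_forward(hidden, W2, b2)  # the hidden units feed the next layer
print(output)

Because each layer applies a nonlinearity before feeding the next, the stacked network can represent functions that no single linear classifier could.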