Mastering Machine Learning with scikit-learn - Second Edition

By: Gavin Hackeling

Overview of this book

Machine learning is the buzzword bringing computer science and statistics together to build smart and efficient models. Using the powerful algorithms and techniques offered by machine learning, you can automate analytical modeling tasks. This book examines a variety of machine learning models, including popular algorithms such as k-nearest neighbors, logistic regression, naive Bayes, k-means, decision trees, and artificial neural networks. It discusses data preprocessing, hyperparameter optimization, and ensemble methods. You will build systems that classify documents, recognize images, detect ads, and more. You will learn to use scikit-learn's API to extract features from categorical variables, text, and images; evaluate model performance; and develop an intuition for how to improve your model's performance. By the end of this book, you will have mastered the scikit-learn concepts required to build efficient models and carry out advanced tasks with a practical approach.
Table of Contents (22 chapters)
Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface
From Decision Trees to Random Forests and Other Ensemble Methods
Index

Feed-forward and feedback ANNs


ANNs are described by three components. The first is the model's architecture, or topology, which describes the types of neurons and the structure of the connections between them. The second is the set of activation functions used by the artificial neurons. The third component is the learning algorithm that finds the optimal values of the weights.
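To make the second component concrete, the following is a minimal sketch of a single artificial neuron: it computes a weighted sum of its inputs plus a bias, then applies an activation function (here the logistic sigmoid). The weights, bias, and input values are arbitrary illustrative numbers, not learned parameters or values from the book.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs plus a bias,
    passed through a logistic (sigmoid) activation function."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Two inputs with hypothetical weights and bias chosen for illustration.
output = neuron(inputs=np.array([0.5, -1.0]),
                weights=np.array([0.8, 0.2]),
                bias=0.1)
# z = 0.8*0.5 + 0.2*(-1.0) + 0.1 = 0.3, so the output is sigmoid(0.3).
print(output)
```

Swapping the sigmoid for another activation function, such as tanh or ReLU, changes the neuron's output range without changing this overall structure.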

There are two main types of ANN. Feed-forward neural networks are the most common type and are defined by their directed acyclic graphs: information travels in one direction only, towards the output layer. Conversely, feedback neural networks, or recurrent neural networks, contain cycles. The feedback cycles can represent an internal state for the network, which can cause the network's behavior to change over time based on its input. Feed-forward neural networks are commonly used to learn a function that maps an input to an output. For example, a feed-forward net can be used to recognize objects in...
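A feed-forward network of the kind described above can be trained with scikit-learn's MLPClassifier. The sketch below fits a network with one hidden layer on a synthetic two-cluster dataset; the dataset, layer size, and solver are illustrative choices, not taken from the book.

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A synthetic binary classification problem: two well-separated clusters.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A feed-forward network with one hidden layer of 8 neurons. Information
# flows from the input layer through the hidden layer to the output
# layer; the network's graph contains no cycles.
clf = MLPClassifier(hidden_layer_sizes=(8,), activation='relu',
                    solver='lbfgs', max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

A recurrent network, by contrast, is outside scikit-learn's scope; its cycles give it an internal state, so libraries designed for sequence models are typically used instead.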