Regularization and dropout


Overfitting is a common issue in deep models. Their extremely high capacity can become problematic even with very large datasets, because the ability to learn the structure of the training set is not always related to the ability to generalize. A deep neural network can easily become an associative memory, but the final internal configuration may not be the most suitable one for managing samples that belong to the same distribution but were never presented during the training process. It goes without saying that this behavior is proportional to the complexity of the separation hypersurface: a linear classifier has a minimal chance of overfitting, while a polynomial classifier is dramatically more prone to it. A combination of hundreds, thousands, or more non-linear functions yields a separation hypersurface that is beyond any feasible analysis. In 1991, Hornik (in Approximation Capabilities of Multilayer Feedforward Networks, Hornik K., Neural Networks, 4/2) generalized...
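As a quick illustration of the two techniques named in this section's title, the following sketch shows how dropout layers and an L2 weight penalty can be combined in a Keras model to limit a network's effective capacity. This is a minimal example, not one taken from the book: the layer sizes, the dropout rate of 0.5, the penalty coefficient 0.0001, and the 64-feature, 10-class problem are all illustrative assumptions.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2

model = Sequential()

# Hidden layer with an L2 penalty on the weights; the penalty term
# discourages large weight values and smooths the separation hypersurface
model.add(Dense(256, activation='relu', input_shape=(64,),
                kernel_regularizer=l2(0.0001)))

# Dropout randomly zeroes 50% of the activations at training time,
# preventing the network from behaving like a pure associative memory
model.add(Dropout(0.5))

model.add(Dense(128, activation='relu', kernel_regularizer=l2(0.0001)))
model.add(Dropout(0.5))

# Output layer for a hypothetical 10-class classification problem
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Note that Keras applies dropout only during training (using the inverted-dropout scheme, which rescales the surviving activations), so no manual adjustment is needed at prediction time.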