
Optimization of neural networks


Various techniques are used to optimize the weights of neural networks:

  • Stochastic gradient descent (SGD)
  • Momentum
  • Nesterov accelerated gradient (NAG)
  • Adaptive gradient (Adagrad)
  • Adadelta
  • RMSprop
  • Adaptive moment estimation (Adam)
  • Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS)

In practice, Adam is a good default choice; we will cover its working methodology in this section. If you can afford full-batch updates, then try out L-BFGS.
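As a quick preview of the methodology covered here, the following is a minimal NumPy sketch of the Adam update rule applied to a hypothetical quadratic objective J(θ) = ||θ||². The objective, learning rate, and hyperparameter values are illustrative assumptions, not taken from the book:

import numpy as np

# Hypothetical objective J(theta) = ||theta||^2, whose gradient is 2 * theta
def grad_J(theta):
    return 2.0 * theta

theta = np.array([1.0, -2.0])          # initial parameters
m = np.zeros_like(theta)               # first moment estimate (mean of gradients)
v = np.zeros_like(theta)               # second moment estimate (mean of squared gradients)
eta, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = grad_J(theta)
    m = beta1 * m + (1 - beta1) * g            # update biased first moment
    v = beta2 * v + (1 - beta2) * (g ** 2)     # update biased second moment
    m_hat = m / (1 - beta1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)               # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)

print(theta)   # parameters approach the minimum at the origin

At each step, Adam keeps exponentially decaying averages of the gradient and its square, corrects them for initialization bias, and scales the step size per parameter, which is why it tends to work well with the default hyperparameter values shown above.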

Stochastic gradient descent - SGD

Gradient descent is a way to minimize an objective function J(θ), parameterized by the model's parameters θ ∈ R^d, by updating the parameters in the direction opposite to the gradient of the objective function ∇θJ(θ) with respect to the parameters. The learning rate η determines the size of the steps taken to reach the minimum, giving the update θ = θ - η ∇θJ(θ). The variants below differ only in how many training observations are used to compute each gradient (a small sketch follows the list):

  • Batch gradient descent (all training observations utilized in each iteration)
  • SGD (one observation per iteration)
  • Mini-batch gradient descent (a mini-batch of about 50 training observations per update)
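The following is a small NumPy sketch illustrating how the three variants differ only in the batch size used per update. The toy linear-regression data, learning rate, and batch size are assumptions made for demonstration:

import numpy as np

# Toy linear-regression data (hypothetical): X is n x d, y is the target vector
rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(200)

def gradient(theta, Xb, yb):
    # Gradient of the mean squared error J(theta) on the batch (Xb, yb)
    return 2.0 / len(yb) * Xb.T @ (Xb @ theta - yb)

theta = np.zeros(3)
eta = 0.05          # learning rate
batch_size = 50     # 1 gives SGD, len(X) gives batch gradient descent

for epoch in range(100):
    idx = rng.permutation(len(X))                      # shuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        theta -= eta * gradient(theta, X[batch], y[batch])

print(theta)   # close to the true coefficients [1.5, -2.0, 0.5]

Setting batch_size to 1 gives SGD, setting it to len(X) gives batch gradient descent, and intermediate values such as 50 give mini-batch gradient descent.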