Variable selection is an important process: by eliminating variables unrelated to the output, it tends to make models simpler to interpret, easier to train, and free of spurious associations. It is one possible approach to dealing with the problem of overfitting. In general, we don't expect a model to fit our training data perfectly; in fact, fitting the training data too well is often detrimental to our predictive model's accuracy on unseen data. In this section, we'll study regularization, an alternative to reducing the number of variables as a way of dealing with overfitting. Regularization introduces an intentional bias or constraint into the training procedure that prevents the coefficients from taking large values. Because this process shrinks the coefficients, the methods we'll look at are also known as shrinkage methods.
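To make the shrinkage idea concrete, the following is a minimal sketch of ridge regression in base R, using its closed-form solution. The simulated data and the penalty values chosen here are illustrative assumptions, not taken from the text:

```r
# Illustrative sketch: ridge regression via its closed-form solution.
# The data below is simulated purely for demonstration.
set.seed(1)
n <- 100
X <- cbind(rnorm(n), rnorm(n), rnorm(n))  # three predictors
y <- X %*% c(3, -2, 0) + rnorm(n)         # third coefficient is truly zero

# Closed-form ridge estimate: beta = (X'X + lambda * I)^(-1) X'y
ridge <- function(X, y, lambda) {
  p <- ncol(X)
  solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)
}

# As the penalty lambda grows, every coefficient is pulled toward zero
for (lambda in c(0, 10, 100)) {
  cat("lambda =", lambda, ":", round(ridge(X, y, lambda), 3), "\n")
}
```

With `lambda = 0` this reduces to ordinary least squares; increasing `lambda` biases the estimates toward zero, trading a little training-set fit for stability on unseen data, which is exactly the bias-variance trade-off regularization exploits.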
Mastering Predictive Analytics with R