MATLAB for Machine Learning - Second Edition

By: Giuseppe Ciaburro

Overview of this book

Discover why the MATLAB programming environment is highly favored by researchers and math experts for machine learning with this guide, which is designed to enhance your proficiency in both machine learning and deep learning using MATLAB, paving the way for advanced applications. By navigating the versatile machine learning tools in the MATLAB environment, you’ll learn how to seamlessly interact with the workspace. You’ll then move on to data cleansing, data mining, analyzing various types of data in machine learning, and visualizing data values on a graph. As you progress, you’ll explore various classification and regression techniques, skillfully applying them with MATLAB functions. This book teaches you the essentials of neural networks, guiding you through data fitting, pattern recognition, and cluster analysis. You’ll also explore feature selection and extraction techniques for performance improvement through dimensionality reduction. Finally, you’ll leverage MATLAB tools for deep learning and managing convolutional neural networks. By the end of the book, you’ll be able to put it all together by applying major machine learning algorithms in real-world scenarios.
Table of Contents (17 chapters)
Part 1: Getting Started with MATLAB
Part 2: Understanding Machine Learning Algorithms in MATLAB
Part 3: Machine Learning in Practice

Understanding advanced regularization techniques

Advanced regularization techniques are methods used in ML and statistical modeling to prevent overfitting and improve the generalization performance of models. Overfitting occurs when a model fits the training data too closely, capturing noise and irrelevant patterns, which leads to poor performance on unseen data. Regularization techniques introduce constraints or penalties to the model’s parameters during training to encourage simpler, more generalized models.
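To make this concrete, here is a minimal MATLAB sketch of penalty-based regularization on a linear model using fitrlinear from the Statistics and Machine Learning Toolbox. The synthetic data, the variable names, and the Lambda value of 0.1 are illustrative assumptions, not values from the book.

% Minimal sketch: L2 (ridge) and L1 (lasso) penalties on a linear regression model.
% The synthetic data and Lambda = 0.1 are illustrative choices.
rng(1);                                        % for reproducibility
X = randn(200, 10);                            % 200 observations, 10 predictors
y = X(:,1) - 2*X(:,3) + 0.5*randn(200, 1);     % only predictors 1 and 3 are informative

mdlRidge = fitrlinear(X, y, 'Learner', 'leastsquares', ...
    'Regularization', 'ridge', 'Lambda', 0.1); % L2 penalty: shrinks all coefficients
mdlLasso = fitrlinear(X, y, 'Learner', 'leastsquares', ...
    'Regularization', 'lasso', 'Lambda', 0.1); % L1 penalty: drives weak coefficients to zero

disp([mdlRidge.Beta, mdlLasso.Beta])           % compare the fitted coefficients side by side

Increasing Lambda strengthens the penalty: the ridge coefficients shrink further toward zero, while more of the lasso coefficients become exactly zero, trading a little training accuracy for better generalization on unseen data.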

Understanding dropout

Dropout is a regularization technique used in NNs, particularly deep NNs (DNNs), to prevent overfitting. Overfitting occurs when an NN learns to fit the training data too closely, capturing noise and memorizing specific examples rather than generalizing from the data. Dropout is a simple yet effective method for improving a model’s generalization performance.
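As a quick illustration of where dropout sits in a network, the following sketch defines a small fully connected network with a dropoutLayer from the Deep Learning Toolbox. The layer sizes, the 20-feature input, and the 0.5 dropout probability are assumptions made for illustration.

% Minimal sketch: a small fully connected network with a dropout layer.
% Layer sizes and the 0.5 dropout probability are illustrative assumptions.
layers = [
    featureInputLayer(20)        % assumed 20 input features
    fullyConnectedLayer(64)
    reluLayer
    dropoutLayer(0.5)            % randomly zeroes about half of the activations during training
    fullyConnectedLayer(1)
    regressionLayer];

The dropoutLayer only takes effect during training; at prediction time it passes its input through unchanged, so no extra scaling is needed when the trained network is used for inference.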

During the training phase, at each forward and backward pass, dropout randomly...