R Deep Learning Projects
Overview of this book

R is a popular programming language used by statisticians and mathematicians for statistical analysis, and is increasingly used for deep learning. Deep learning is one of today's trending topics, and it is finding practical applications in many domains. This book demonstrates end-to-end implementations of five real-world projects on popular deep learning topics: handwritten digit recognition, traffic light detection, fraud detection, text generation, and sentiment analysis. You'll learn how to train effective neural networks in R—including convolutional neural networks, recurrent neural networks, and LSTMs—and apply them in practical scenarios. The book also highlights how neural networks can be trained using GPU capabilities. You will use popular R libraries and packages—such as MXNetR, H2O, deepnet, and more—to implement the projects. By the end of this book, you will have a better understanding of deep learning concepts and techniques and how to use them in a practical setting.

Reviewing methods to prevent overfitting in CNNs


Overfitting occurs when a model fits the training set too well but is not able to generalize to unseen cases. For example, a CNN model may memorize the specific traffic sign images in the training set instead of learning general patterns. This can be very dangerous: a self-driving car must recognize signs under ever-changing conditions, such as weather, lighting, and angles different from those presented in the training set. To recap, here's what we can do to reduce overfitting:

  • Collecting more training data (if possible and feasible) in order to account for more varied input data.
  • Using data augmentation, wherein we synthesize new samples from existing ones (for example, by rotating, shifting, or slightly distorting images) when time or cost does not allow us to collect more data.
  • Employing dropout, which diminishes complex co-adaptations among neighboring neurons.
  • Adding a Lasso (L1) and/or Ridge (L2) penalty, which prevents model coefficients from fitting the training data so perfectly that overfitting arises.
  • Reducing the complexity...
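
The dropout and penalty mechanisms above can be sketched in a few lines of base R. This is an illustrative sketch only, not MXNetR's API; the `dropout` helper, the `rate` and `lambda` values, and the sample weights are made up for illustration:

```r
# Illustrative base-R sketch of inverted dropout and L1/L2 penalties.
# dropout() is a toy helper for exposition, not an MXNetR function.

set.seed(42)
activations <- matrix(runif(12), nrow = 3)  # a small batch of hidden activations

dropout <- function(a, rate = 0.5) {
  # Zero each unit with probability `rate`; rescale the survivors by
  # 1 / (1 - rate) so the expected activation is unchanged at training time.
  mask <- matrix(rbinom(length(a), 1, 1 - rate), nrow = nrow(a))
  (a * mask) / (1 - rate)
}

dropped <- dropout(activations)  # each entry is either 0 or 2x the original

# L1 (Lasso) and L2 (Ridge) penalties are added to the training loss:
weights    <- c(0.8, -1.2, 0.05, 0.3)  # hypothetical model coefficients
lambda     <- 0.01                     # regularization strength
l1_penalty <- lambda * sum(abs(weights))  # pushes small weights toward zero
l2_penalty <- lambda * sum(weights^2)     # discourages large weights overall
```

Because a different random subset of neurons is silenced on every forward pass, no neuron can rely on the presence of a particular neighbor, which is exactly the co-adaptation effect dropout is meant to break.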