Deep Learning with R for Beginners

By: Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado

Overview of this book

Deep learning has a range of practical applications in several domains, while R is the preferred language for designing and deploying deep learning models. This Learning Path introduces you to the basics of deep learning and even teaches you to build a neural network model from scratch. As you make your way through the chapters, you’ll explore deep learning libraries and understand how to create deep learning models for a variety of challenges, right from anomaly detection to recommendation systems. The Learning Path will then help you cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud, in addition to model optimization, overfitting, and data augmentation. Through real-world projects, you’ll also get up to speed with training convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) in R. By the end of this Learning Path, you’ll be well-versed with deep learning and have the skills you need to implement a number of deep learning concepts in your research work or projects.

Summary


This chapter covered topics that are critical to success in deep learning projects. These included the different types of evaluation metrics that can be used to assess a model. We looked at issues that can arise in data preparation, including how to work with only a small amount of training data and how to split the data properly into train, validation, and test sets. We examined two important issues that can cause a model to perform poorly in production: different data distributions and data leakage. We saw how data augmentation can improve an existing model by creating artificial data, and we looked at tuning hyperparameters to improve the performance of a deep learning model. We closed the chapter by working through a use case in which we simulated a problem with different data distributions/data leakage and used LIME to interpret an existing deep learning model.
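
As a quick illustration of the data-splitting step recapped above, the following minimal R sketch (not taken from the book's code) shuffles a data frame once and carves it into 60/20/20 train, validation, and test sets. The data frame name df, the placeholder columns, and the split proportions are illustrative assumptions:

# Minimal sketch of a train/validation/test split in base R.
# The data frame `df` and the 60/20/20 proportions are illustrative assumptions.
set.seed(42)                                          # make the split reproducible
df <- data.frame(x = rnorm(1000), y = rnorm(1000))    # placeholder data

n <- nrow(df)
idx <- sample(seq_len(n))                             # shuffle row indices once
train_idx <- idx[1:floor(0.6 * n)]
valid_idx <- idx[(floor(0.6 * n) + 1):floor(0.8 * n)]
test_idx  <- idx[(floor(0.8 * n) + 1):n]

train_data <- df[train_idx, ]                         # used to fit the model
valid_data <- df[valid_idx, ]                         # used to tune hyperparameters
test_data  <- df[test_idx, ]                          # held out for final evaluation

Shuffling once before splitting avoids any ordering effects in the original data; in practice you may also want a stratified split so that class proportions are preserved across the three sets, and the test set should be touched only for the final evaluation to avoid the leakage problems discussed in this chapter.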

Some of the concepts in this chapter may seem somewhat...