Deep Learning with R for Beginners

By: Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado

Overview of this book

Deep learning has a range of practical applications across several domains, and R is a popular language for designing and deploying deep learning models. This Learning Path introduces you to the basics of deep learning and teaches you to build a neural network model from scratch. As you make your way through the chapters, you’ll explore deep learning libraries and learn how to create deep learning models for a variety of challenges, from anomaly detection to recommendation systems. The Learning Path then covers advanced topics such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud, in addition to model optimization, overfitting, and data augmentation. Through real-world projects, you’ll also get up to speed with training convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) in R. By the end of this Learning Path, you’ll be well versed in deep learning and have the skills you need to apply deep learning concepts in your research work or projects.

Summary


In this chapter, we used deep learning for image classification. We discussed the different layer types used in image classification: convolutional layers, pooling layers, dropout layers, and dense layers, together with the softmax activation function. We also saw an R Shiny application that shows how convolutional layers perform feature engineering on image data.
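To make these layer types concrete, here is a minimal sketch of a LeNet-style network built with MXNet's symbol API in R. The filter counts, kernel sizes, hidden-layer size, and dropout rate are illustrative assumptions, not necessarily the exact values used in the chapter.

```r
library(mxnet)

# Input placeholder for 28x28 grayscale images (e.g. MNIST / Fashion MNIST)
data <- mx.symbol.Variable("data")

# First convolution + pooling block: learns low-level features such as edges
conv1 <- mx.symbol.Convolution(data = data, kernel = c(5, 5), num_filter = 20)
act1  <- mx.symbol.Activation(data = conv1, act_type = "relu")
pool1 <- mx.symbol.Pooling(data = act1, pool_type = "max",
                           kernel = c(2, 2), stride = c(2, 2))

# Second convolution + pooling block: combines features into higher-level patterns
conv2 <- mx.symbol.Convolution(data = pool1, kernel = c(5, 5), num_filter = 50)
act2  <- mx.symbol.Activation(data = conv2, act_type = "relu")
pool2 <- mx.symbol.Pooling(data = act2, pool_type = "max",
                           kernel = c(2, 2), stride = c(2, 2))

# Flatten, then a dense (fully connected) layer with dropout for regularization
flat <- mx.symbol.Flatten(data = pool2)
fc1  <- mx.symbol.FullyConnected(data = flat, num_hidden = 500)
act3 <- mx.symbol.Activation(data = fc1, act_type = "relu")
drop <- mx.symbol.Dropout(data = act3, p = 0.5)

# Output layer: 10 classes with a softmax activation
fc2   <- mx.symbol.FullyConnected(data = drop, num_hidden = 10)
lenet <- mx.symbol.SoftmaxOutput(data = fc2, name = "softmax")
```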

We used the MXNet deep learning library in R to create a base deep learning model that achieved 97.1% accuracy. We then developed a CNN model based on the LeNet architecture, which achieved over 98.3% accuracy on the test data. We also took on a slightly harder dataset (Fashion MNIST) and created a new model that achieved over 91% accuracy, higher than any of the scores we obtained with non-deep learning algorithms.
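For reference, a minimal training and evaluation sketch along these lines is shown below. It assumes a network symbol such as the `lenet` sketch above, and that `train.array`, `train.y`, `test.array`, and `test.y` already hold the image arrays (28 x 28 x 1 x N) and label vectors; the hyperparameters are illustrative rather than the chapter's exact settings.

```r
# Train the network; mx.set.seed makes the run reproducible
mx.set.seed(0)
model <- mx.model.FeedForward.create(
  lenet,
  X = train.array, y = train.y,
  ctx = mx.cpu(),
  num.round = 10,
  array.batch.size = 128,
  learning.rate = 0.05,
  momentum = 0.9,
  eval.metric = mx.metric.accuracy
)

# Predict class probabilities on the test set and compute accuracy.
# predict() returns a classes-by-observations matrix, so transpose before max.col;
# subtract 1 because the labels start at 0.
preds <- predict(model, test.array)
pred.labels <- max.col(t(preds)) - 1
accuracy <- mean(pred.labels == test.y)
accuracy
```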

In the next chapter, we will build on what we have covered and show you how to take advantage of pre-trained models, both for classification and as building blocks for new deep learning models.