Summary


This chapter showed how to get started with building and training neural networks to classify data, using image recognition and physical activity data as examples. We looked at packages that can visualize a neural network, and we created a number of models to perform classification on data with 10 different categories. Although we used only basic neural network packages rather than deep learning packages, our models took a long time to train and we ran into issues with overfitting.
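
As a quick illustration of the kind of workflow involved, the following is a minimal sketch rather than the chapter's exact code: it assumes the nnet and NeuralNetTools packages, fits a small single-hidden-layer network on the built-in iris data, and plots the network's structure.

library(nnet)           # single-hidden-layer neural networks
library(NeuralNetTools) # plotnet() for visualizing fitted networks

set.seed(42)
# Fit a network with one hidden layer of four units on the iris species task
model <- nnet(Species ~ ., data = iris, size = 4, maxit = 200, trace = FALSE)

# Confusion matrix of in-sample predictions
table(predicted = predict(model, iris, type = "class"),
      actual    = iris$Species)

# Plot the network structure: inputs, hidden units, outputs, and weights
plotnet(model)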

Some of the basic neural network models in this chapter took a long time to train, even though we did not use all of the available data. For the MNIST data, we used approximately 8,000 rows for our binary classification task and only 6,000 rows for our multiclass classification task. Even so, one model took almost an hour to train. Our deep learning models will be much more complicated and should be able to process millions of records. You can now see why specialist hardware is required to train deep learning models.
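
To see why training takes so long even on a subset, the sketch below times a single-hidden-layer nnet model on roughly 6,000 rows. It is illustrative only, not the chapter's code, and it assumes a hypothetical data frame mnist_train with a label column (digits 0-9) and 784 pixel columns, for example as read from the MNIST CSV files with read.csv().

library(nnet)

set.seed(42)
# 'mnist_train' is assumed to exist: a data frame with a 'label' column
# and 784 pixel columns. Sample roughly 6,000 rows for the multiclass task.
idx   <- sample(nrow(mnist_train), 6000)
train <- mnist_train[idx, ]
train$label <- factor(train$label)

# Rescale pixel intensities from 0-255 to 0-1 to help optimization
pixel_cols <- setdiff(names(train), "label")
train[pixel_cols] <- train[pixel_cols] / 255

# Time a single-hidden-layer network; even this modest model can take
# many minutes on a CPU, which is why specialist hardware matters
system.time(
  fit <- nnet(label ~ ., data = train, size = 30,
              MaxNWts = 50000, maxit = 100, trace = FALSE)
)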

Secondly, we see...