Deep Learning with R for Beginners

By: Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado

Overview of this book

Deep learning has a range of practical applications in several domains, and R is a popular language for designing and deploying deep learning models. This Learning Path introduces you to the basics of deep learning and even teaches you to build a neural network model from scratch. As you make your way through the chapters, you’ll explore deep learning libraries and understand how to create deep learning models for a variety of challenges, from anomaly detection to recommendation systems. The Learning Path will then help you cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud, in addition to model optimization, overfitting, and data augmentation. Through real-world projects, you’ll also get up to speed with training convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) in R. By the end of this Learning Path, you’ll be well-versed in deep learning and have the skills you need to implement a number of deep learning concepts in your research work or projects.

Summary


We have just finished the first mile of our R and deep learning journey! In this chapter, we became more familiar with the important concepts of deep learning. We started with what deep learning is all about, why it is important, and its recent successful applications. Once we were well equipped, we tackled the handwritten digit recognition problem with shallow neural networks, deep neural networks, and CNNs in turn, and showed that CNNs are best suited to exploiting the strong, distinctive features that differentiate images of different classes.
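
To make the comparison concrete, the following is a minimal sketch of a LeNet-style CNN for 28 x 28 grayscale digit images, written with the mxnet R package. It illustrates the approach rather than reproducing the chapter's exact model: the data objects train.array (a 28 x 28 x 1 x N array) and train.y (digit labels), the filter counts, and the training hyperparameters are all assumptions.

library(mxnet)

data <- mx.symbol.Variable("data")

# First convolution + pooling block: learns low-level features such as edges
conv1 <- mx.symbol.Convolution(data = data, kernel = c(5, 5), num_filter = 20)
act1  <- mx.symbol.Activation(data = conv1, act_type = "relu")
pool1 <- mx.symbol.Pooling(data = act1, pool_type = "max",
                           kernel = c(2, 2), stride = c(2, 2))

# Second block: combines low-level features into curves and shapes
conv2 <- mx.symbol.Convolution(data = pool1, kernel = c(5, 5), num_filter = 50)
act2  <- mx.symbol.Activation(data = conv2, act_type = "relu")
pool2 <- mx.symbol.Pooling(data = act2, pool_type = "max",
                           kernel = c(2, 2), stride = c(2, 2))

# Fully connected classifier on top of the learned representations
flat  <- mx.symbol.Flatten(data = pool2)
fc1   <- mx.symbol.FullyConnected(data = flat, num_hidden = 500)
act3  <- mx.symbol.Activation(data = fc1, act_type = "relu")
fc2   <- mx.symbol.FullyConnected(data = act3, num_hidden = 10)
lenet <- mx.symbol.SoftmaxOutput(data = fc2)

mx.set.seed(42)
model <- mx.model.FeedForward.create(
  lenet, X = train.array, y = train.y,
  ctx = mx.cpu(), num.round = 10, array.batch.size = 100,
  learning.rate = 0.05, momentum = 0.9,
  eval.metric = mx.metric.accuracy
)

A shallow or deep fully connected network can be built from the same pieces by dropping the convolution and pooling layers and stacking mx.symbol.FullyConnected layers directly on the flattened input, which makes the three models straightforward to compare on the same task.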

Inspired by the human visual cortex, CNNs classify images by first deriving rich representations such as edges, curves, and shapes, as we demonstrated by visualizing the outputs of the convolutional layers. In addition, we verified the performance and generalization of the CNN model by using early stopping as a technique to avoid overfitting. Overall, we not only covered the mechanics of CNNs, including the concepts of convolution and pooling...
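
To illustrate the early stopping idea itself, rather than the chapter's exact code, here is a minimal sketch using the keras R interface: part of the training data is held out for validation, and training halts once the validation loss stops improving. The objects model (an already compiled network), x_train, and y_train, as well as the patience of three epochs, are assumptions made for the example.

library(keras)

history <- model %>% fit(
  x_train, y_train,
  epochs = 30,
  batch_size = 128,
  validation_split = 0.2,            # hold out 20% of the training data for validation
  callbacks = list(
    callback_early_stopping(
      monitor = "val_loss",          # watch validation loss, not training loss
      patience = 3,                  # stop after 3 epochs without improvement
      restore_best_weights = TRUE    # roll back to the weights of the best epoch
    )
  )
)

The returned history object records the training and validation metrics for each epoch, so it is easy to see how many epochs actually ran before the callback stopped training and to compare the two curves for signs of overfitting.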