Deep Learning with R for Beginners

By: Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado
Overview of this book

Deep learning has a range of practical applications in several domains, while R is the preferred language for designing and deploying deep learning models. This Learning Path introduces you to the basics of deep learning and even teaches you to build a neural network model from scratch. As you make your way through the chapters, you’ll explore deep learning libraries and understand how to create deep learning models for a variety of challenges, right from anomaly detection to recommendation systems. The Learning Path will then help you cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud, in addition to model optimization, overfitting, and data augmentation. Through real-world projects, you’ll also get up to speed with training convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) in R. By the end of this Learning Path, you’ll be well-versed with deep learning and have the skills you need to implement a number of deep learning concepts in your research work or projects.

RNN using Keras


In this section, we introduce an example using Keras. Keras is possibly the highest-level API for deep learning (at least at the time of writing, in this rapidly changing field). This makes it very useful when you need production-ready models quickly, but it is unfortunately sometimes not that great for learning, as everything is hidden away from you. Since, ideally, by the time you reach this section you are an expert in recurrent neural networks, we can show you how to create a similar model.
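To give a taste of how little code Keras requires, the following is a minimal sketch of a character-level recurrent model using the keras R package. The vocabulary size, sequence length, and layer sizes are illustrative assumptions, not values from a specific dataset:

```r
# Minimal sketch of a character-level RNN with the keras R package.
# vocab_size and seq_length are assumed values for illustration.
library(keras)

vocab_size <- 60   # assumed: number of distinct characters
seq_length <- 40   # assumed: characters fed in per training sample

model <- keras_model_sequential() %>%
  layer_simple_rnn(units = 128,
                   input_shape = c(seq_length, vocab_size)) %>%
  layer_dense(units = vocab_size, activation = "softmax")

model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = optimizer_rmsprop()
)

# x: one-hot array of shape (samples, seq_length, vocab_size)
# y: one-hot array of shape (samples, vocab_size)
# model %>% fit(x, y, batch_size = 128, epochs = 20)
```

Note that swapping `layer_simple_rnn()` for `layer_lstm()` or `layer_gru()` is a one-line change; this convenience is exactly what Keras trades against transparency.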

Before that, let's introduce a simple benchmark model. Something that comes to mind when we speak about the memory of a neural network is the following: what if we had sufficient storage to estimate the conditional probabilities of the next character given the preceding ones, and could simulate text generation as a Markov process, where the state variable is the observed text? We will implement this benchmark to see how its text generation quality compares with that of recurrent neural networks.
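To make the idea concrete, here is a minimal sketch of such a benchmark: an order-k character-level Markov chain, estimated by simple counting. The function name `markov_generate` and its defaults are our own illustrative choices, not code from the book:

```r
# Minimal sketch of an order-k character-level Markov text generator.
# markov_generate() is an illustrative helper, not a library function.
markov_generate <- function(text, k = 5, n = 200) {
  chars <- strsplit(text, "")[[1]]
  # Count transitions: each k-character state maps to the characters
  # observed to follow it in the training text
  transitions <- list()
  for (i in seq_len(length(chars) - k)) {
    state <- paste(chars[i:(i + k - 1)], collapse = "")
    transitions[[state]] <- c(transitions[[state]], chars[i + k])
  }
  # Start from the opening state and sample the chain forward
  state <- paste(chars[1:k], collapse = "")
  out <- state
  for (step in seq_len(n)) {
    nexts <- transitions[[state]]
    if (is.null(nexts)) break   # unseen state: stop generating
    out <- paste0(out, sample(nexts, 1))
    state <- substr(out, nchar(out) - k + 1, nchar(out))
  }
  out
}

# Example usage (assumed input file):
# text <- paste(readLines("corpus.txt"), collapse = "\n")
# cat(markov_generate(tolower(text), k = 5, n = 300))
```

Sampling the empirical next-character counts in this way is exactly a Markov chain whose state is the last k characters. Increasing k sharpens the predictions, but the number of states, and hence the storage required, grows quickly, which is the limitation alluded to above.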

A simple benchmark implementation...