R Deep Learning Essentials

By: Joshua F. Wiley

Overview of this book

Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data using layered model architectures. With its superb memory management and full integration with multi-node big data platforms, the H2O engine has become increasingly popular among data scientists in the field of deep learning.

This book will introduce you to the deep learning package H2O with R and help you understand the concepts of deep learning. We will start by setting up the important deep learning packages available in R and then move on to building models related to neural networks, prediction, and deep prediction, all with the help of real-life examples.

After installing the H2O package, you will learn about prediction algorithms. Moving ahead, concepts such as overfitting data, anomalous data, and deep prediction models are explained. Finally, the book covers concepts relating to tuning and optimizing models.

Summary


In this chapter, we covered what deep neural networks are in more detail, particularly how to use them to train prediction models. Even though deep feedforward neural networks can seem quite complex, they can be broken down into a sequence of fairly simple layers, each with one set of inputs and one set of outputs, along with weights and biases mapping between the two.
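
As a rough illustration of this layer-by-layer view, the following sketch computes the output of a single hidden layer in base R. The weight matrix, bias vector, and the choice of a rectified-linear activation are illustrative assumptions, not values taken from the chapter.

    # one feedforward layer: inputs -> weights and biases -> activation -> outputs
    set.seed(1234)
    x <- matrix(rnorm(5 * 3), nrow = 5, ncol = 3)   # 5 cases, 3 input features
    W <- matrix(rnorm(3 * 4), nrow = 3, ncol = 4)   # weights mapping 3 inputs to 4 units
    b <- rnorm(4)                                   # one bias per output unit

    relu <- function(z) pmax(z, 0)                  # assumed non-linear activation

    # add the bias to each column of the linear combination, then activate
    h <- relu(sweep(x %*% W, 2, b, "+"))
    dim(h)  # 5 x 4: one row per case, one column per hidden unit

Stacking several such layers, where the outputs of one layer become the inputs of the next, gives the deep feedforward architecture used in this chapter.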

We have also seen the improvement in predictive performance that is possible using deep learning. In the use case example, linear regression alone accounted for 23% of the variance in the testing data; however, by using a deep feedforward neural network, we were able to account for 35% of the variance in the year of song release. Although still far from perfect, this is a dramatic improvement over regression, and the remaining shortfall probably has more to do with a lack of data to explain year-to-year differences than with the model itself (in other words, even with the best model achieving 99% variance accounted...
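
For readers who want to reproduce this kind of comparison, here is a minimal sketch of fitting a linear regression and an H2O deep feedforward network and scoring both by the share of variance explained (R-squared) on held-out data. The data frames train and test, the outcome column "year", and the hidden-layer sizes are assumptions for illustration, not the exact setup used in the chapter.

    library(h2o)
    h2o.init(nthreads = -1)

    ## baseline: ordinary linear regression fit in base R
    lm.fit  <- lm(year ~ ., data = train)
    lm.pred <- predict(lm.fit, newdata = test)
    r2.lm   <- 1 - sum((test$year - lm.pred)^2) /
                   sum((test$year - mean(test$year))^2)

    ## deep feedforward neural network via H2O (layer sizes are illustrative)
    train.h2o <- as.h2o(train)
    test.h2o  <- as.h2o(test)
    dl.fit <- h2o.deeplearning(
      x = setdiff(colnames(train), "year"),
      y = "year",
      training_frame = train.h2o,
      validation_frame = test.h2o,
      hidden = c(100, 100),
      epochs = 10)

    ## variance explained on the held-out data for both models
    r2.dl <- h2o.r2(dl.fit, valid = TRUE)
    c(linear = r2.lm, deep = r2.dl)

Comparing the two R-squared values on the same test set is what the 23% versus 35% figures above refer to.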