R Deep Learning Essentials - Second Edition

By: Mark Hodnett, Joshua F. Wiley

Overview of this book

Deep learning is a powerful subset of machine learning that is very successful in domains such as computer vision and natural language processing (NLP). This second edition of R Deep Learning Essentials will open the gates for you to enter the world of neural networks by building powerful deep learning models using the R ecosystem. This book will introduce you to the basic principles of deep learning and teach you to build a neural network model from scratch. As you make your way through the book, you will explore deep learning libraries, such as Keras, MXNet, and TensorFlow, and create interesting deep learning models for a variety of tasks and problems, including structured data, computer vision, text data, anomaly detection, and recommendation systems. You’ll cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud. In the concluding chapters, you will learn about key concepts in deep learning projects, such as model optimization, overfitting, and data augmentation, together with other advanced topics. By the end of this book, you will be fully prepared and able to implement deep learning concepts in your research work or projects.

Summary

This chapter began by showing you how to program a neural network from scratch. We demonstrated the neural network in a web application created using only R code. We delved into how the neural network actually works, showing how to code forward propagation, cost functions, and backpropagation. We then saw how the parameters of our neural network map onto modern deep learning libraries by examining the mx.model.FeedForward.create function from the mxnet deep learning library.
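
As a refresher, here is a minimal sketch of the kind of from-scratch network described above, with forward propagation, a cross-entropy cost, and backpropagation written in base R. The toy data, variable names, and hyperparameters are illustrative assumptions, not the book's exact code.

# Minimal from-scratch sketch of a one-hidden-layer network (illustrative only)
sigmoid <- function(z) 1 / (1 + exp(-z))

set.seed(42)
X <- matrix(rnorm(100 * 2), ncol = 2)        # 100 cases, 2 features
y <- as.numeric(X[, 1] + X[, 2] > 0)         # toy binary target

n_hidden <- 5
W1 <- matrix(rnorm(2 * n_hidden, sd = 0.1), 2, n_hidden)
b1 <- rep(0, n_hidden)
W2 <- matrix(rnorm(n_hidden, sd = 0.1), n_hidden, 1)
b2 <- 0
lr <- 0.5

for (i in 1:2000) {
  # forward propagation
  Z1 <- sweep(X %*% W1, 2, b1, "+")
  A1 <- sigmoid(Z1)
  A2 <- sigmoid(A1 %*% W2 + b2)

  # cross-entropy cost
  cost <- -mean(y * log(A2) + (1 - y) * log(1 - A2))

  # backpropagation
  dZ2 <- A2 - y
  dW2 <- t(A1) %*% dZ2 / nrow(X)
  db2 <- mean(dZ2)
  dZ1 <- (dZ2 %*% t(W2)) * A1 * (1 - A1)
  dW1 <- t(X) %*% dZ1 / nrow(X)
  db1 <- colMeans(dZ1)

  # gradient descent update
  W1 <- W1 - lr * dW1; b1 <- b1 - lr * db1
  W2 <- W2 - lr * dW2; b2 <- b2 - lr * db2
}

Libraries such as mxnet wrap these same steps; functions like mx.model.FeedForward.create expose the equivalent choices (architecture, learning rate, number of training rounds) as parameters instead of hand-written loops.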

We then covered overfitting and demonstrated several approaches to preventing it, including the common L1 and L2 penalties, ensembles of simpler models, and dropout, where variables and/or cases are dropped to make the model noisy. We also examined the role of penalties in regression problems and neural networks. In the next chapter, we will move into deep learning and...
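
To illustrate the penalty and dropout ideas summarized above, the from-scratch sketch shown earlier can be extended as follows. The penalty strength lambda and the dropout rate p are assumed values chosen for illustration, not the book's exact code.

# L2 (weight-decay) penalty: add lambda/2 * sum of squared weights to the cost,
# and lambda * W to each weight gradient (biases are usually left unpenalised)
lambda <- 0.01
cost <- -mean(y * log(A2) + (1 - y) * log(1 - A2)) +
  (lambda / 2) * (sum(W1^2) + sum(W2^2))
dW2 <- t(A1) %*% dZ2 / nrow(X) + lambda * W2
dW1 <- t(X) %*% dZ1 / nrow(X) + lambda * W1

# Inverted dropout on the hidden layer during training: zero out units with
# probability p, then rescale so the expected activation is unchanged
p <- 0.5
mask <- matrix(rbinom(length(A1), 1, 1 - p), nrow(A1), ncol(A1)) / (1 - p)
A1 <- A1 * mask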