R Deep Learning Projects

Overview of this book

R is a popular programming language used by statisticians and mathematicians for statistical analysis, and it is increasingly used for deep learning. Deep learning is one of today's most active areas of research and is finding practical applications in many domains. This book demonstrates end-to-end implementations of five real-world projects on popular deep learning topics: handwritten digit recognition, traffic light detection, fraud detection, text generation, and sentiment analysis. You'll learn how to train effective neural networks in R, including convolutional neural networks, recurrent neural networks, and LSTMs, and apply them in practical scenarios. The book also shows how neural networks can be trained using GPU capabilities. You will use popular R libraries and packages, such as MXNetR, H2O, and deepnet, to implement the projects. By the end of this book, you will have a better understanding of deep learning concepts and techniques and how to use them in a practical setting.

Word embeddings


The field of natural language processing (NLP) is advancing rapidly, much like modern data science and artificial intelligence as a whole.

Algorithms such as word2vec (Mikolov and others, 2013) and GloVe (Pennington and others, 2014) have been pioneers in the field, and although, strictly speaking, neither of them is a deep learning model, the word vectors they produce are used as input data in many applications of deep learning to NLP.

We will briefly describe word2vec and GloVe, which are perhaps the most commonly used algorithms for word embedding, although research at the intersection of neural networks and language goes back to at least Jeff Elman in the 1990s.

word2vec

The word2vec algorithm (or, rather, family of algorithms) takes a text corpus as input and produces word vectors as output. It first constructs a vocabulary from the training text data and then learns vector representations of words. Then we use those vectors as features for machine learning algorithms...
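
To make this workflow concrete, here is a minimal sketch of training and querying word vectors in R. It assumes the CRAN word2vec package, which is not one of the packages used in the book's projects, and the texts object is a purely hypothetical toy corpus used only for illustration.

# A minimal sketch (assumption: the CRAN 'word2vec' package is installed);
# 'texts' is a hypothetical toy corpus
library(word2vec)

texts <- c("the cat sat on the mat",
           "the dog sat on the log",
           "cats and dogs are pets")

# Build the vocabulary and learn 50-dimensional vectors with the CBOW model;
# min_count = 1 keeps every word of this tiny corpus in the vocabulary
model <- word2vec(x = texts, type = "cbow", dim = 50, iter = 20, min_count = 1)

# Extract the embedding matrix: one row per word in the vocabulary
embeddings <- as.matrix(model)
dim(embeddings)

# Words closest to "cat" in the learned vector space
predict(model, newdata = "cat", type = "nearest", top_n = 5)

# The rows of 'embeddings' can now be used as features for downstream
# machine learning models
head(embeddings["cat", ])

On a corpus this small the resulting vectors are not meaningful; the point is simply the three steps named above: build the vocabulary, learn the vectors, and reuse them as features.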