Scala Machine Learning Projects
Overview of this book

Machine learning has had a huge impact on academia and industry by turning data into actionable information. Scala has seen a steady rise in adoption over the past few years, especially in the fields of data science and analytics. This book is for data scientists, data engineers, and deep learning enthusiasts who have a background in complex numerical computing and want hands-on experience of machine learning application development. If you're well versed in machine learning concepts and want to expand your knowledge by delving into the practical implementation of these concepts using the power of Scala, then this book is what you need! Through 11 end-to-end projects, you will become acquainted with popular machine learning libraries such as Spark ML, H2O, DeepLearning4j, and MXNet. By the end, you will be able to use numerical computing and functional programming to carry out complex numerical tasks and to develop, build, and deploy research or commercial projects in a production-ready environment.

Working with RNNs


In this section, we will first provide some contextual information about RNNs. Then, we will highlight some potential drawbacks of classical RNNs. Finally, we will see an improved RNN variant, called LSTM, that addresses those drawbacks.

Contextual information and the architecture of RNNs

Human beings don't start thinking from scratch; the human mind has a so-called persistence of memory, the ability to associate the past with recent information. Traditional neural networks, in contrast, ignore past events. In a movie scene classifier, for example, such a network cannot use an earlier scene to help classify the current one. RNNs were developed to address this problem:

Figure 1: RNNs have loops

In contrast to conventional neural networks, RNNs are networks with a loop that allows information to persist (Figure 1). A chunk of the network, say A, takes some input xt at time step t and outputs a value ht; the loop passes information from one time step to the next. From Figure 1, then, we can think of an RNN as multiple copies of the same...
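To make the loop concrete, the following is a minimal sketch (in plain Scala, with no library dependencies) of a vanilla RNN cell. The object name `RnnCellSketch`, the weight matrices `wxh` and `whh`, and the recurrence ht = tanh(Wxh·xt + Whh·ht-1) are illustrative assumptions, not code from the book; real projects in later chapters use libraries such as DeepLearning4j instead.

```scala
// Hypothetical sketch of a vanilla RNN cell: the same weights are reused
// at every time step, and the hidden state h carries information forward.
object RnnCellSketch {
  // Element-wise hyperbolic tangent activation
  private def tanh(v: Array[Double]): Array[Double] = v.map(math.tanh)

  // Matrix-vector product
  private def matVec(m: Array[Array[Double]], v: Array[Double]): Array[Double] =
    m.map(row => row.zip(v).map { case (a, b) => a * b }.sum)

  // One recurrent step: h_t = tanh(Wxh * x_t + Whh * h_{t-1})
  def step(wxh: Array[Array[Double]], whh: Array[Array[Double]],
           x: Array[Double], hPrev: Array[Double]): Array[Double] =
    tanh(matVec(wxh, x).zip(matVec(whh, hPrev)).map { case (a, b) => a + b })

  // Unrolling the loop over a sequence is just a fold over the inputs:
  // each step sees the hidden state produced by the previous one.
  def run(wxh: Array[Array[Double]], whh: Array[Array[Double]],
          xs: Seq[Array[Double]], h0: Array[Double]): Array[Double] =
    xs.foldLeft(h0)((h, x) => step(wxh, whh, x, h))
}
```

Note that `run` is exactly the "multiple copies of the same network" view: unrolling the loop gives a chain of identical `step` applications, each passing its hidden state to its successor.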