Long short-term memory networks

LSTM is a particular recurrent neural network (RNN) architecture, originally proposed by Hochreiter and Schmidhuber in 1997. This type of network has been rediscovered in the context of deep learning because it largely avoids the vanishing gradient problem, and in practice it offers excellent results and performance.
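
As a concrete illustration, the following is a minimal sketch of an LSTM model in Keras; the sequence length, feature size, layer widths, and toy data are illustrative assumptions, not taken from the book:

```python
import numpy as np
import tensorflow as tf

# Assumed shapes for illustration: sequences of 20 time steps,
# each step an 8-dimensional feature vector, one regression target.
timesteps, features = 20, 8

model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, features)),
    # The LSTM layer maintains a cell state alongside the hidden
    # state, which is what lets gradients flow across many steps.
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Random toy data, just to demonstrate the expected tensor shapes.
x = np.random.rand(100, timesteps, features).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
```

Stacking several LSTM layers follows the same pattern, with `return_sequences=True` set on every LSTM layer except the last so that each layer passes its full output sequence to the next.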

The vanishing gradient problem affects the training of ANNs with gradient-based learning methods. In gradient-based methods such as backpropagation, each weight is adjusted in proportion to the gradient of the error with respect to that weight. Because these gradients are computed by repeatedly applying the chain rule through the layers, their magnitude decreases exponentially as we proceed towards the deepest layers. The problem is that in some cases the gradient will be vanishingly small, effectively preventing a weight from changing its value. In the worst case...
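
To see why the magnitudes shrink, recall that backpropagation multiplies one factor per layer, and with a saturating activation such as the sigmoid, whose derivative never exceeds 0.25, that product tends to decay exponentially with depth. The sketch below demonstrates this with plain NumPy; the depth and the random stand-in weights and pre-activations are arbitrary assumptions chosen only to make the effect visible:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
depth = 30   # assumed network depth, for illustration only
grad = 1.0   # gradient of the loss at the output layer

for layer in range(depth, 0, -1):
    z = rng.normal()  # stand-in pre-activation value
    w = rng.normal()  # stand-in weight
    # Each step back multiplies by w * sigmoid'(z);
    # since sigmoid'(z) <= 0.25, the product shrinks rapidly.
    grad *= w * sigmoid(z) * (1.0 - sigmoid(z))
    if layer % 10 == 0:
        print(f"layer {layer:2d}: |gradient| = {abs(grad):.3e}")
```

Running this prints gradient magnitudes that collapse towards zero within a few tens of layers, which is exactly the regime in which the early layers of a deep network, or the distant time steps of an unrolled RNN, stop learning.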