Hands-On Machine Learning on Google Cloud Platform

By: Giuseppe Ciaburro, V Kishore Ayyadevara, Alexis Perrier

Overview of this book

Google Cloud Machine Learning Engine combines the services of Google Cloud Platform with the power and flexibility of TensorFlow. With this book, you will not only learn to build and train machine learning models of varying complexity at scale, but also host them in the cloud to make predictions. The book focuses on making the most of the Google Machine Learning Platform for large datasets and complex problems. You will learn from scratch how to create powerful machine-learning-based applications for a wide variety of problems by leveraging different data services from Google Cloud Platform. Applications include NLP, speech-to-text, reinforcement learning, time series, recommender systems, image classification, video content inference, and many others. We will implement a wide variety of deep learning use cases and make extensive use of the data-related services that make up the Google Cloud Platform ecosystem, such as Firebase, Storage APIs, and Datalab. This will enable you to integrate machine learning and data processing features into your web and mobile applications. By the end of this book, you will know the main difficulties you may encounter and the appropriate strategies to overcome them and build efficient systems.

Beyond Feedforward Networks – CNN and RNN

Artificial neural networks (ANNs) are now widespread tools across many technologies. In their simplest form, ANNs use a feedforward architecture, in which connections between neurons carry signals in one direction only. The feedforward neural network was the first and simplest type of ANN devised. For some classes of problems, however, this strictly unidirectional structure is a serious limitation. It is nevertheless possible to start from this architecture and build networks in which the result computed by one unit feeds back into the computation of another. Naturally, the algorithms that govern the dynamics of such networks must satisfy new convergence criteria.
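To make this distinction concrete, the following is a minimal sketch in plain NumPy (the weight names, sizes, and activation are illustrative assumptions, not code from this book). A feedforward unit computes its output from the current input alone, whereas a recurrent unit also feeds its previous state back into the computation, so earlier results influence later ones:

import numpy as np

# Illustrative dimensions: 4 inputs, 3 hidden units
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 3
W = rng.normal(size=(n_hidden, n_in))      # input-to-hidden weights
U = rng.normal(size=(n_hidden, n_hidden))  # hidden-to-hidden (recurrent) weights
b = np.zeros(n_hidden)

def feedforward_step(x):
    # The output depends only on the current input x
    return np.tanh(W @ x + b)

def recurrent_step(x, h_prev):
    # The output also depends on the previous hidden state h_prev
    return np.tanh(W @ x + U @ h_prev + b)

# Process a short sequence with both update rules
sequence = rng.normal(size=(5, n_in))
h = np.zeros(n_hidden)
for x in sequence:
    ff_out = feedforward_step(x)  # no memory of earlier steps
    h = recurrent_step(x, h)      # state carried across time steps

In the recurrent loop, the hidden state h is the mechanism by which the result of one computation affects the next; training such a network requires algorithms (for example, backpropagation through time) whose convergence behavior differs from that of plain feedforward training.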

In this chapter, we'll go over the main ANN architectures, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks. We'll explain...